heat-10.0.0/releasenotes/notes/add-template-dir-config-b96392a9e116a2d3.yaml
---
features:
  - Add a `template_dir` option to the config. Heat's default template
    directory is `/etc/heat/templates`; this change makes that setting
    official. In the future, it may enable features such as accessing
    templates directly from the global template environment.

heat-10.0.0/releasenotes/notes/add-list-concat-unique-function-5a87130d9c93cb08.yaml
---
features:
  - The ``list_concat_unique`` function was added. It behaves like
    ``list_concat``, concatenating several lists using python's extend
    function, but ensures the result contains no repeated items.

heat-10.0.0/releasenotes/notes/add-contains-function-440aa7184a07758c.yaml
---
features:
  - The ``contains`` function was added, which checks whether the specified
    value is in a sequence. In addition, the new function can be used as a
    condition function.

heat-10.0.0/releasenotes/notes/subnet-pool-resource-c32ff97d4f956b73.yaml
---
features:
  - A new ``OS::Neutron::SubnetPool`` resource that helps in managing the
    lifecycle of a neutron subnet pool. Availability of this resource
    depends on availability of the neutron ``subnet_allocation`` API
    extension.
  - Resource ``OS::Neutron::Subnet`` now supports an optional
    ``subnetpool`` property, which automates the allocation of a CIDR for
    the subnet from the specified subnet pool.

heat-10.0.0/releasenotes/notes/bp-update-cinder-resources-e23e62762f167d29.yaml
---
features:
  - The OS::Cinder::QoSAssociation resource plugin is added to support
    cinder QoS specs association with volume types, which is provided by
    the cinder ``qos-specs`` API extension.

heat-10.0.0/releasenotes/notes/set-tags-for-port-471155bb53436361.yaml
---
features:
  - Allow setting or updating tags for the OS::Neutron::Port resource.

heat-10.0.0/releasenotes/notes/glance-image-tag-6fa123ca30be01aa.yaml
---
features:
  - The OS::Glance::Image resource plug-in is updated to support tagging
    when an image is created or updated as part of a stack.

heat-10.0.0/releasenotes/notes/sahara-job-resource-84aecc11fdf1d5af.yaml
---
features:
  - A new resource ``OS::Sahara::Job`` has been added, which allows
    creating and launching sahara jobs. A job can be launched with a
    resource signal.
  - Custom constraints were added for all sahara resources -
    sahara.cluster, sahara.cluster_template, sahara.data_source,
    sahara.job_binary, sahara.job_type.

heat-10.0.0/releasenotes/notes/system-random-string-38a14ae2cb6f4a24.yaml
---
security:
  - |
    Heat no longer uses the standard Python RNG when generating values for
    the OS::Heat::RandomString resource, and instead relies on the system's
    RNG for that.
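The list functions noted above (``list_concat_unique`` and ``contains``) can be illustrated with a small HOT sketch. This is an illustrative example, assuming a template version recent enough to include both functions (the queens alias is used here):

```yaml
heat_template_version: queens

outputs:
  merged:
    # list_concat_unique concatenates the lists but skips duplicates,
    # so this yields [1, 2, 3, 4]
    value: {list_concat_unique: [[1, 2, 3], [2, 4]]}
  has_item:
    # contains takes [value, sequence] and evaluates to true here
    value: {contains: [2, [1, 2, 3]]}
```

Because ``contains`` is also usable as a condition function, the same expression could appear in a template's ``conditions`` section.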
heat-10.0.0/releasenotes/notes/remove-cloudwatch-api-149403251da97b41.yaml
---
upgrade:
  - |
    The AWS-compatible CloudWatch API, deprecated for a long time, has
    finally been removed. OpenStack deployments, packagers, and deployment
    projects which deploy/package CloudWatch should take appropriate action
    to remove support.

heat-10.0.0/releasenotes/notes/bp-support-conditions-1a9f89748a08cd4f.yaml
---
features:
  - Adds an optional ``conditions`` section for HOT templates
    (heat_template_version 2016-10-14) and ``Conditions`` for cfn
    templates (AWSTemplateFormatVersion 2010-09-09).
  - Adds condition functions, such as ``equals``, ``not``, ``and`` and
    ``or``. These can be used in the ``conditions`` section to define one
    or more conditions which are evaluated based on input parameter values
    provided when a user creates or updates a stack.
  - Adds an optional ``condition`` key for resource and output definitions.
    A condition name defined in ``conditions``, or a condition function,
    can be referenced in this key in order to conditionally create
    resources or conditionally give outputs of a stack.
  - Adds the function ``if``, which returns the corresponding value based
    on condition evaluation. This function can be used to conditionally
    set the value of resource properties and outputs.

heat-10.0.0/releasenotes/notes/policy-in-code-124372f6cdb0a497.yaml
---
features:
  - |
    Heat now supports policy in code: if you have not modified any policy
    rules, you no longer need to add rules to the `policy.yaml` or
    `policy.json` file, because heat now keeps all default policies under
    `heat/policies`. You can still generate and modify a `policy.yaml`
    file, whose rules will override the policy rules in code.
upgrade:
  - |
    The default policy.json file has been removed, as the default policies
    are now generated in code. Please be aware of this if you use that file
    in your environment. You can still generate a `policy.yaml` file if
    that is required in your environment.

heat-10.0.0/releasenotes/notes/bp-support-rbac-policy-fd71f8f6cc97bfb6.yaml
---
features:
  - The OS::Neutron::RBACPolicy resource plugin is added to create and
    manage RBAC policies in Neutron, which allow sharing Neutron networks
    with subsets of tenants.

heat-10.0.0/releasenotes/notes/give-me-a-network-67e23600945346cd.yaml
---
features:
  - New item key 'allocate_network' of 'networks', with allowed values
    'auto' and 'none', for OS::Nova::Server, to support the nova 'Give Me
    a Network' feature. Specifying 'auto' auto-allocates a network
    topology for the project if there is no existing network available;
    specifying 'none' means no networking will be allocated for the
    created server. This feature requires nova API microversion 2.37 or
    later and the ``auto-allocated-topology`` API to be available in the
    Neutron networking service.
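The conditions support described in these notes can be sketched with a minimal HOT template. Parameter and resource names here are illustrative:

```yaml
heat_template_version: 2016-10-14

parameters:
  env_type:
    type: string
    default: test

conditions:
  # True only when the stack is created/updated with env_type=prod
  create_prod: {equals: [{get_param: env_type}, prod]}

resources:
  backup_volume:
    type: OS::Cinder::Volume
    condition: create_prod   # created only when the condition holds
    properties:
      size: 10

outputs:
  volume_size:
    # 'if' returns one of two values based on the condition evaluation
    value: {if: [create_prod, 10, 0]}
```

Updating the stack with ``env_type=prod`` evaluates ``create_prod`` to true, so the volume is created and the output becomes 10.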
heat-10.0.0/releasenotes/notes/environment_validate_template-fee21a03bb628446.yaml
---
features:
  - |
    The template validate API call now returns the environment calculated
    by heat - this enables a preview of the merged environment when using
    parameter_merge_strategy, prior to creating the stack.

heat-10.0.0/releasenotes/notes/remove-SSLMiddleware-2f15049af559f26a.yaml
---
deprecations:
  - |
    The SSL middleware ``heat.api.middleware.ssl:SSLMiddleware``, which had
    been deprecated since 6.0.0, has now been removed. Check your paste
    config and ensure it has been replaced by
    ``oslo_middleware.http_proxy_to_wsgi`` instead.

heat-10.0.0/releasenotes/notes/cinder-quota-resource-f13211c04020cd0c.yaml
---
features:
  - New resource ``OS::Cinder::Quota`` is added to manage cinder quotas.
    Cinder quotas are operational limits on projects' cinder block storage
    resources. These include gigabytes, snapshots, and volumes.

heat-10.0.0/releasenotes/notes/server-ephemeral-bdm-v2-55e0fe2afc5d8b63.yaml
---
features:
  - OS::Nova::Server now supports ephemeral_size and ephemeral_format keys
    in the block_device_mapping_v2 property. ephemeral_size is an integer
    that requires a flavor with an ephemeral disk size greater than 0.
    ephemeral_format is a string with allowed values ext2, ext3, ext4, xfs
    and, for Windows guests, ntfs; it is optional, and if it has no value
    the default defined in the nova config file is used.

heat-10.0.0/releasenotes/notes/add-zun-container-c31fa5316237b13d.yaml
---
features:
  - |
    A new OS::Zun::Container resource is added that allows users to manage
    docker containers powered by Zun. This resource has an 'addresses'
    attribute that contains various networking information, including the
    neutron port id. This allows users to orchestrate containers with
    other networking resources (e.g. floating IPs).

heat-10.0.0/releasenotes/notes/octavia-resources-0a25720e16dfe55d.yaml
---
features:
  - Adds new resources for the octavia lbaas service.
  - New resource ``OS::Octavia::LoadBalancer`` is added to create and
    manage Load Balancers which allow traffic to be directed between
    servers.
  - New resource ``OS::Octavia::Listener`` is added to create and manage
    Listeners which represent a listening endpoint for the Load Balancer.
  - New resource ``OS::Octavia::Pool`` is added to create and manage Pools
    which represent a group of nodes. Pools define the subnet where nodes
    reside, the balancing algorithm, and the nodes themselves.
  - New resource ``OS::Octavia::PoolMember`` is added to create and manage
    Pool members which represent a single backend node.
  - New resource ``OS::Octavia::HealthMonitor`` is added to create and
    manage Health Monitors which watch the status of the load-balanced
    servers.
  - New resource ``OS::Octavia::L7Policy`` is added to create and manage
    L7 Policies.
  - New resource ``OS::Octavia::L7Rule`` is added to create and manage L7
    Rules.

heat-10.0.0/releasenotes/notes/event-list-nested-depth-80081a2a8eefee1a.yaml
---
prelude: >
  Previously the event list REST API call only returned events for the
  specified stack, even when that stack contained nested stack resources.
  This meant that fetching all nested events required an inefficient
  recursive client-side implementation.
features:
  - The event list GET REST API call now behaves differently when the
    'nested_depth' parameter is set to an integer greater than zero. The
    response will contain all events down to the requested nested depth.
  - When 'nested_depth' is set, the response also includes an extra entry
    in the 'links' list with 'rel' set to 'root_stack'. This can be used
    by client-side implementations to detect whether it is necessary to
    fall back to client-side recursive event fetching.

heat-10.0.0/releasenotes/notes/nova-quota-resource-84350f0467ce2d40.yaml
---
features:
  - New resource ``OS::Nova::Quota`` is added to enable an admin to manage
    Compute service quotas for a specific project.

heat-10.0.0/releasenotes/notes/template-validate-improvements-52ecf5125c9efeda.yaml
---
features:
  - Template validation is improved to ignore a given set of error codes.
    For example, heat will report a template as invalid if it does not
    find a required OpenStack service in the cloud deployment; while
    authoring the template, a user might want to avoid such scenarios so
    that they can create a valid template without worrying about run-time
    environments. Please refer to the API documentation of validate
    template for more details.

heat-10.0.0/releasenotes/notes/monasca-supported-71c5373282c3b338.yaml
---
features:
  - The OS::Monasca::AlarmDefinition and OS::Monasca::Notification
    resource plug-ins are now supported by the heat community, as monasca
    became an official OpenStack project.

heat-10.0.0/releasenotes/notes/magnum-resource-update-0f617eec45ef8ef7.yaml
---
prelude: >
  Magnum recently changed terminology to more intuitively convey key
  concepts in order to align with industry standards. "Bay" is now
  "Cluster" and "BayModel" is now "ClusterTemplate". This release
  deprecates the old names in favor of the new.
features:
  - The OS::Magnum::Cluster resource plugin is added to support the magnum
    cluster feature, which is provided by the magnum ``cluster`` API.
  - The OS::Magnum::ClusterTemplate resource plugin is added to support
    the magnum cluster template feature, which is provided by the magnum
    ``clustertemplates`` API.
deprecations:
  - Magnum terminology deprecations

    * `OS::Magnum::Bay` is now deprecated; use `OS::Magnum::Cluster`
      instead
    * `OS::Magnum::BayModel` is now deprecated; use
      `OS::Magnum::ClusterTemplate` instead

    Deprecation warnings are printed for old usages.

heat-10.0.0/releasenotes/notes/get-server-webmks-console-url-f7066a9e14429084.yaml
---
features:
  - Supports getting the webmks console URL for the OS::Nova::Server
    resource. This requires a nova API version equal to or greater than
    2.8.

heat-10.0.0/releasenotes/notes/api-outputs-6d09ebf5044f51c3.yaml
---
features:
  - Added new functionality for showing and listing stack outputs without
    resolving all outputs during stack initialisation.
  - Added new API calls for showing and listing stack outputs -
    ``/stack/outputs`` and ``/stack/outputs/output_key``.
  - Added use of the new API in python-heatclient for ``output_show`` and
    ``output_list``. Now, if the Heat API version is 1.19 or above, the
    Heat client will use the ``output_show`` and ``output_list`` API calls
    instead of parsing the stack get response. If the Heat API version is
    lower than 1.19, outputs are resolved in the Heat client as before.

heat-10.0.0/releasenotes/notes/add-hostname-hints-security_groups-to-container-d3b69ae4b6f71fc7.yaml
---
features:
  - |
    Added ``hostname``, ``hints``, ``security_groups``, and ``mounts``
    properties to Zun Container resources.
heat-10.0.0/releasenotes/notes/bp-mistral-new-resource-type-workflow-execution-748bd37faa3e427b.yaml
---
features:
  - |
    A new OS::Mistral::ExternalResource is added that allows users to
    manage resources that are not known to Heat, by specifying in the
    template Mistral workflows to handle actions such as create, update
    and delete.

heat-10.0.0/releasenotes/notes/mark-combination-alarm-as-placeholder-resource-e243e9692cab52e0.yaml
---
critical:
  - Since Aodh dropped support for combination alarms,
    OS::Aodh::CombinationAlarm is now marked as a hidden resource and
    inherits directly from the None resource, which makes the resource do
    nothing when handling any action (other than delete). Please do not
    use it. Old resources created with that resource type can still be
    deleted. It is recommended to switch away from that resource type as
    soon as possible, since it will be removed soon.

heat-10.0.0/releasenotes/notes/senlin-resources-71c856dc62d0b407.yaml
---
features:
  - New resource ``OS::Senlin::Cluster`` is added to create a cluster in
    senlin. A cluster is a group of homogeneous nodes.
  - New resource ``OS::Senlin::Node`` is added to create a node in senlin.
    A node represents a physical object exposed by other OpenStack
    services.
  - New resource ``OS::Senlin::Receiver`` is added to create a receiver in
    senlin. A receiver can be used to hook the engine up to some external
    event/alarm source.
  - New resource ``OS::Senlin::Profile`` is added to create a profile in
    senlin. A profile is a module used for creating nodes; it is the
    definition of a node.
  - New resource ``OS::Senlin::Policy`` is added to create a policy in
    senlin. A policy is a set of rules that can be checked and/or enforced
    when an action is performed on a cluster.
heat-10.0.0/releasenotes/notes/converge-flag-for-stack-update-e0e92a7fe232f10f.yaml
---
features:
  - Add a `converge` parameter for the stack update (and update preview)
    API. This parameter forces resources to observe the reality of
    resources before actually updating them. The value of this parameter
    can be any boolean value. It will replace the `observe_on_update`
    config flag in the near future.

heat-10.0.0/releasenotes/notes/change-heat-keystone-user-name-limit-to-255-bd076132b98744be.yaml
---
other:
  - The heat keystone user name character limit has been increased from 64
    to 255. Any extra characters will be lost when the name is truncated
    to the last 255 characters.

heat-10.0.0/releasenotes/notes/bp-support-host-aggregate-fbc4097f4e6332b8.yaml
---
features:
  - The OS::Nova::HostAggregate resource plugin is added to support host
    aggregates, which are provided by the nova ``aggregates`` API
    extension.
  - The nova.host constraint is added to support validating the host
    attribute, which is provided by the nova ``host`` API extension.

heat-10.0.0/releasenotes/notes/zaqar-notification-a4d240bbf31b7440.yaml
---
features:
  - New ``OS::Zaqar::Subscription`` and ``OS::Zaqar::MistralTrigger``
    resource types allow users to attach to Zaqar queues, respectively,
    notifications in general, and notifications that trigger Mistral
    workflow executions in particular.
heat-10.0.0/releasenotes/notes/configurable-server-name-limit-947d9152fe9b43ee.yaml
---
features:
  - Adds a new 'max_server_name_length' configuration option which
    defaults to the prior upper bound (53) and can be lowered by users (if
    they need to, for example due to ldap or other internal name limit
    restrictions).

heat-10.0.0/releasenotes/notes/external-resources-965d01d690d32bd2.yaml
---
prelude: >
  Support external resource references in templates.
features:
  - Add an `external_id` attribute for a resource to reference an existing
    external resource. A resource with the `external_id` attribute set
    will not be able to be updated. This keeps management rights external.
  - This feature only supports templates with a version of `2016-10-14` or
    later.

heat-10.0.0/releasenotes/notes/set-tags-for-subnetpool-d86ca0d7e35a05f1.yaml
---
features:
  - Allow setting or updating tags for the OS::Neutron::SubnetPool
    resource.

heat-10.0.0/releasenotes/notes/make_url-function-d76737adb1e54801.yaml
---
features:
  - The Pike version of HOT (2017-09-01) adds a make_url function to
    simplify combining data from different sources into a URL, with
    correct handling for escaping and IPv6 addresses.

heat-10.0.0/releasenotes/notes/deployment-swift-data-server-property-51fd4f9d1671fc90.yaml
---
features:
  - A new property, deployment_swift_data, is added to the
    OS::Nova::Server and OS::Heat::DeployedServer resources. The property
    is used to define the Swift container and object name that is used for
    deployment data for the server. If unset, the fallback is the previous
    behavior, where these values will be automatically generated.
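The make_url function mentioned above assembles URL components into a single string. A minimal sketch, where the server resource and its property values are illustrative:

```yaml
heat_template_version: 2017-09-01

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros      # illustrative image/flavor
      flavor: m1.small

outputs:
  api_endpoint:
    # make_url escapes components and brackets IPv6 hosts as needed
    value:
      make_url:
        scheme: http
        host: {get_attr: [server, first_address]}
        port: 8080
        path: /v1/status
        query:
          verbose: "true"
```

This avoids hand-building URLs with str_replace, which would require handling escaping and IPv6 bracketing manually.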
heat-10.0.0/releasenotes/notes/parameter-group-for-nested-04559c4de34e326a.yaml
---
features:
  - The ParameterGroups section is added to nested stacks, for the output
    of the stack validate templates.

heat-10.0.0/releasenotes/notes/stack-definition-in-functions-3f7f172a53edf535.yaml
---
other:
  - Intrinsic function plugins will now be passed a StackDefinition object
    instead of a Stack object. When accessing resources, the
    StackDefinition will return ResourceProxy objects instead of Resource
    objects. These classes replicate the parts of the Stack and Resource
    APIs that are used by the built-in Function plugins, but authors of
    custom third-party Template/Function plugins should audit them to
    ensure they do not depend on unstable parts of the API that are no
    longer accessible. The StackDefinition and ResourceProxy APIs are
    considered stable and any future changes to them will go through the
    standard deprecation process.

heat-10.0.0/releasenotes/notes/hidden-heat-harestarter-resource-a123479c317886a3.yaml
---
upgrade:
  - |
    The ``OS::Heat::HARestarter`` resource type is no longer supported.
    This resource type is now hidden from the documentation. HARestarter
    resources in stacks, including pre-existing ones, are now only
    placeholders and will no longer do anything.

    The recommended alternative is to mark a resource unhealthy and then
    do a stack update to replace it. This still correctly manages
    dependencies but, unlike HARestarter, also avoids replacing dependent
    resources unnecessarily. An example of this technique can be seen in
    the autohealing sample templates at
    https://git.openstack.org/cgit/openstack/heat-templates/tree/hot/autohealing

heat-10.0.0/releasenotes/notes/add-list_concat-function-c28563ab8fb6362e.yaml
---
features:
  - The list_concat function was added, which concatenates several lists
    using python's extend function.

heat-10.0.0/releasenotes/notes/deprecate-nova-floatingip-resources-d5c9447a199be402.yaml
---
deprecations:
  - nova-network is no longer supported in OpenStack. Please use
    OS::Neutron::FloatingIPAssociation and OS::Neutron::FloatingIP in
    place of OS::Nova::FloatingIPAssociation and OS::Nova::FloatingIP.
  - The AWS::EC2::EIP domain is always assumed to be 'vpc', since
    nova-network is not supported in OpenStack any longer.

heat-10.0.0/releasenotes/notes/legacy-stack-user-id-cebbad8b0f2ed490.yaml
---
upgrade:
  - If upgrading with pre-icehouse stacks which contain resources that
    create users (such as OS::Nova::Server, OS::Heat::SoftwareDeployment,
    and OS::Heat::WaitConditionHandle), it is possible that the users will
    not be removed upon stack deletion, due to the removal of a legacy
    fallback code path. In such a situation, these users will require
    manual removal.

heat-10.0.0/releasenotes/notes/keystone-project-allow-get-attribute-b382fe97694e3987.yaml
---
fixes:
  - Add an attribute schema to `OS::Keystone::Project`. This allows the
    get_attr function to work with the project resource.
heat-10.0.0/releasenotes/notes/server-add-user-data-update-policy-c34646acfaada4d4.yaml
---
features:
  - OS::Nova::Server now supports a new property, user_data_update_policy,
    which may be set to either 'REPLACE' (the default) or 'IGNORE' if you
    wish user_data updates to be ignored on stack update. This is useful
    when managing a group of servers where changed user_data should apply
    to new servers without replacing existing servers.

heat-10.0.0/releasenotes/notes/parameter-tags-148ef065616f92fc.yaml
---
features:
  - |
    Added a new schema property, tags, to parameters, to categorize
    parameters based on features.

heat-10.0.0/releasenotes/notes/add-cephfs-share-protocol-033e091e7c6c5166.yaml
---
fixes:
  - |
    'CEPHFS' can be used as a share protocol when using the
    OS::Manila::Share resource.

heat-10.0.0/releasenotes/notes/keystone-domain-support-e06e2c65c5925ae5.yaml
---
features:
  - A new resource plugin ``OS::Keystone::Domain`` is added to support the
    lifecycle of a keystone domain.

heat-10.0.0/releasenotes/notes/set-tags-for-subnet-17a97b88dd11de63.yaml
---
features:
  - Allow setting or updating tags for the OS::Neutron::Subnet resource.

heat-10.0.0/releasenotes/notes/force-delete-nova-instance-6ed5d7fbd5b6f5fe.yaml
---
fixes:
  - Force-delete the nova instance. If a resource is related to a nova
    instance in 'SOFT_DELETED' status, the resource cannot be deleted when
    nova is configured with 'reclaim_instance_interval'. So the nova
    instance is force-deleted, and then all the resources related to the
    instance can be processed properly.

heat-10.0.0/releasenotes/notes/remove-heat-resourcetype-constraint-b679618a149fc04e.yaml
---
deprecations:
  - The heat.resource_type custom constraint has been removed. This
    constraint never actually worked.

heat-10.0.0/releasenotes/notes/set-networks-for-trove-cluster-b997a049eedbad17.yaml
---
features:
  - Allow setting the networks of instances for the OS::Trove::Cluster
    resource.

heat-10.0.0/releasenotes/notes/cinder-qos-specs-resource-ca5a237ebc114729.yaml
---
features:
  - The OS::Cinder::QoSSpecs resource plugin is added to support cinder
    QoS specs, which are provided by the cinder ``qos-specs`` API
    extension.
  - The cinder.qos_specs constraint is added to support validating the QoS
    specs attribute.

heat-10.0.0/releasenotes/notes/fix-attachments-type-c5b6fb5b4c2bcbfe.yaml
---
deprecations:
  - |
    The 'attachments' attribute of OS::Cinder::Volume has been deprecated
    in favor of 'attachments_list', which has the correct type of LIST.
    This makes this data easier for end users to process.

heat-10.0.0/releasenotes/notes/server-side-multi-env-7862a75e596ae8f5.yaml
---
features:
  - Multiple environment files may be passed to the server in the files
    dictionary, along with an ordered list of the environment file names.
    The server will generate the stack's environment from the provided
    files rather than requiring the client to merge the environments
    together. This is optional; the existing interface to pass in the
    already-resolved environment is still present.
heat-10.0.0/releasenotes/notes/environment-merging-d623362fac1279f7.yaml
---
prelude: >
  Previously, 'parameters' and 'parameter_defaults' specified in an
  environment file would overwrite their existing values.
features:
  - A new 'parameter_merge_strategies' section can be added to the
    environment file, where a 'default' and/or parameter-specific merge
    strategies can be specified.
  - Parameters and parameter defaults specified in the environment file
    will be merged according to their specified strategies.

heat-10.0.0/releasenotes/notes/know-limit-releasenote-4d21fc4d91d136d9.yaml
---
issues:
  - |
    Heat does not work with keystone identity federation. This is a known
    limitation, as heat uses keystone trusts for deferred authentication
    and trusts don't work with federated keystone. For more details check
    `https://etherpad.openstack.org/p/pike-ptg-cross-project-federation`.

heat-10.0.0/releasenotes/notes/designate-v2-support-0f889e9ad13d4aa2.yaml
---
features:
  - Designate v2 resource plugins OS::Designate::Zone and
    OS::Designate::RecordSet are newly added.
deprecations:
  - Designate v1 resource plugins OS::Designate::Domain and
    OS::Designate::Record are deprecated.

heat-10.0.0/releasenotes/notes/yaql-function-4895e39555c2841d.yaml
---
features:
  - Add a ``yaql`` function, which takes two arguments - ``expression`` of
    type string and ``data`` of type map - and evaluates ``expression`` on
    the given ``data``.
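The yaql function takes an ``expression`` string and a ``data`` map, and evaluates the expression against the data. A minimal sketch (parameter name and expression are illustrative):

```yaml
heat_template_version: 2016-10-14

parameters:
  list_param:
    type: comma_delimited_list
    default: [1, 2, 3]

outputs:
  max_element:
    # The expression addresses the data map via $.data.<key>;
    # int($) casts the list items, which arrive as strings
    value:
      yaql:
        expression: $.data.list_param.select(int($)).max()
        data:
          list_param: {get_param: list_param}
```

This is handy for transformations (sums, maxima, filtering) that would otherwise need a custom resource or external processing.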
heat-10.0.0/releasenotes/notes/bp-support-neutron-qos-3feb38eb2abdcc87.yaml0000666000175100017510000000101113245511546026363 0ustar zuulzuul00000000000000--- features: - OS::Neutron::QoSPolicy resource plugin is added to support QoS policy, which is provided by neutron ``qos`` API extension. - OS::Neutron::QoSBandwidthLimitRule resource plugin is added to support neutron QoS bandwidth limit rule, which is provided by neutron ``qos`` API extension. - Resources ``OS::Neutron::Port`` and ``OS::Neutron::Net`` now support ``qos_policy`` optional property, that will associate with QoS policy to offer different service levels based on the policy rules. heat-10.0.0/releasenotes/notes/cancel_without_rollback-e5d978a60d9baf45.yaml0000666000175100017510000000013213245511546026440 0ustar zuulzuul00000000000000--- features: Adds REST api support to cancel a stack create/update without rollback. heat-10.0.0/releasenotes/notes/cinder-backup-cb72e775681fb5a5.yaml0000666000175100017510000000071413245511546024210 0ustar zuulzuul00000000000000--- upgrade: - New config section ``volumes`` with new config option ``[volumes]backups_enabled`` (defaults to ``True``). Operators that do not have Cinder backup service deployed in their cloud are encouraged to set this option to ``False``. fixes: - Allow to configure Heat service to forbid creation of stacks containing Volume resources with ``deletion_policy`` set to ``Snapshot`` when there is no Cinder backup service available. heat-10.0.0/releasenotes/notes/neutron-quota-resource-7fa5e4df8287bf77.yaml0000666000175100017510000000013113245511546026235 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Neutron::Quota`` is added to manage neutron quotas. heat-10.0.0/releasenotes/notes/dns-resolution-5afc1c57dfd05aff.yaml0000666000175100017510000000040113245511546024746 0ustar zuulzuul00000000000000--- features: - Supports internal DNS resolution and integration with external DNS services for neutron resources. 
Template authors can use the ``dns_name`` and ``dns_domain`` properties of neutron resource plugins for this functionality. heat-10.0.0/releasenotes/notes/doc-migrate-10c968c819848240.yaml0000666000175100017510000000031213245511546023370 0ustar zuulzuul00000000000000--- features: - | All developer, contributor, and user content from various guides in openstack-manuals has been moved in-tree and are published at `https://docs.openstack.org/heat/pike/`.heat-10.0.0/releasenotes/notes/store-resource-attributes-8bcbedca2f86986e.yaml0000666000175100017510000000110013245511546027062 0ustar zuulzuul00000000000000--- features: - Resource attributes are now stored at the time a resource is created or updated, allowing for fast resolution of outputs without having to retrieve live data from the underlying physical resource. To minimise compatibility problems, the behaviour of the `show` attribute, the `with_attr` option to the resource show API, and stacks that do not yet use the convergence architecture (due to the convergence_engine being disabled at the time they were created) is unchanged - in each of these cases live data will still be returned. heat-10.0.0/releasenotes/notes/add-zun-client-plugin-dfc10ecd1a6e98be.yaml0000666000175100017510000000020313245511546026073 0ustar zuulzuul00000000000000--- other: - | Introduce a Zun client plugin module that will be used by the Zun's resources that are under development. heat-10.0.0/releasenotes/notes/repeat-support-setting-permutations-fbc3234166b529ca.yaml0000666000175100017510000000072213245511546030657 0ustar zuulzuul00000000000000--- features: - Added new section ``permutations`` for ``repeat`` function, to decide whether to iterate nested the over all the permutations of the elements in the given lists. If 'permutations' is not specified, we set the default value to true to compatible with before behavior. 
The args have to be lists instead of dicts if 'permutations' is False, because keys in a dict are unordered, and the list args all have to be of the same length. heat-10.0.0/releasenotes/notes/resource-search-3234afe601ea4e9d.yaml0000666000175100017510000000043513245511546024636 0ustar zuulzuul00000000000000--- features: - A stack can be searched for resources based on their name, status, type, action, id and physical_resource_id. This feature is enabled in both the REST API and the CLI. For more details, please refer to the orchestration API documentation and the heat CLI user guide. heat-10.0.0/releasenotes/notes/add-tags-for-neutron-router-43d72e78aa89fd07.yaml0000666000175100017510000000012413245511546026756 0ustar zuulzuul00000000000000--- features: - Allows setting or updating the tags for the OS::Neutron::Router resource. heat-10.0.0/releasenotes/notes/monasca-period-f150cdb134f1e036.yaml0000666000175100017510000000116113245511546024351 0ustar zuulzuul00000000000000--- features: - Add an optional 'period' property for the Monasca Notification resource. The newly added property allows the user to tell Monasca the interval in seconds at which to periodically invoke a webhook until the ALARM state transitions back to an OK state or vice versa. This is useful when the user wants to create a stack which will automatically scale up or scale down more than once if the alarm state continues to be in the same state. To conform to the existing Heat autoscaling behaviour, we manually create the monasca notification resource in Heat with a default interval value of 60. heat-10.0.0/releasenotes/notes/neutron-lbaas-v2-resources-c0ebbeb9bc9f7a42.yaml0000666000175100017510000000174213245511546027100 0ustar zuulzuul00000000000000--- features: - New resources for Neutron Load Balancer version 2. These are unique to version 2 and do not support or mix with existing version 1 resources. 
- New resource ``OS::Neutron::LBaaS::LoadBalancer`` is added to create and manage Load Balancers, which allow traffic to be directed between servers. - New resource ``OS::Neutron::LBaaS::Listener`` is added to create and manage Listeners, which represent a listening endpoint for the Load Balancer. - New resource ``OS::Neutron::LBaaS::Pool`` is added to create and manage Pools, which represent a group of nodes. Pools define the subnet where nodes reside, the balancing algorithm, and the nodes themselves. - New resource ``OS::Neutron::LBaaS::PoolMember`` is added to create and manage Pool members, which represent a single backend node. - New resource ``OS::Neutron::LBaaS::HealthMonitor`` is added to create and manage Health Monitors, which watch the status of the load-balanced servers. heat-10.0.0/releasenotes/notes/immutable-parameters-a13dc9bec7d6fa0f.yaml0000666000175100017510000000100513245511546026100 0ustar zuulzuul00000000000000--- features: - Adds a new "immutable" boolean field to the parameters section in a HOT template. This gives template authors the ability to mark template parameters as immutable to restrict updating parameters which have destructive effects on the application. A value of True results in the engine rejecting stack-updates that include changes to that parameter. When not specified in the template, "immutable" defaults to False to ensure backwards compatibility with old templates. heat-10.0.0/releasenotes/notes/add-aodh-composite-alarm-f8eb4f879fe0916b.yaml0000666000175100017510000000030313245511546026322 0ustar zuulzuul00000000000000--- features: - The OS::Aodh::CompositeAlarm resource plugin is added to manage Aodh composite alarms, aiming to replace OS::Aodh::CombinationAlarm, which has been deprecated in the Newton release. 
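As a sketch of the "immutable" parameter field described in the note above, a minimal HOT fragment might look like the following (the parameter name, default, and server properties are illustrative assumptions, not taken from the release note):

```yaml
heat_template_version: 2016-10-14

parameters:
  db_flavor:
    type: string
    default: m1.small
    # With immutable set to true, the engine rejects any stack-update
    # that passes a different value for this parameter.
    immutable: true

resources:
  db_server:
    type: OS::Nova::Server
    properties:
      image: my-db-image          # illustrative image name
      flavor: { get_param: db_flavor }
```

A later ``stack update`` that changes ``db_flavor`` would then fail validation instead of triggering a destructive server rebuild; omitting ``immutable`` keeps the old, always-updatable behaviour.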
heat-10.0.0/releasenotes/notes/restrict_update_replace-68abece58cf3f6a0.yaml0000666000175100017510000000043313245511546026607 0ustar zuulzuul00000000000000--- features: - Adds a new feature to restrict update or replace of a resource when a stack is being updated. Template authors can set ``restricted_actions`` in the ``resources`` section of ``resource_registry`` in an environment file to restrict update or replace.heat-10.0.0/releasenotes/notes/sync-queens-releasenote-13f68851f7201e37.yaml0000666000175100017510000000215113245511546026031 0ustar zuulzuul00000000000000--- prelude: | Note that Heat is compatible with OpenStack Identity federation, even when using Keystone trusts. It should work after you enable Federation and build the `auto-provisioning map`_ with the heat service user in Keystone. Auto-provisioning has been available in Keystone since the Ocata release. .. _auto-provisioning map: https://docs.openstack.org/keystone/latest/advanced-topics/federation/federated_identity.html#auto-provisioning other: - | The Heat plugin in Horizon has been replaced with a new stand-alone Horizon plugin, heat-dashboard. You can see more detail in the heat-dashboard repository (https://git.openstack.org/cgit/openstack/heat-dashboard). - | The old Heat Tempest plugin ``heat_tests`` has been removed and replaced by a separate Tempest plugin named ``heat``, in the heat-tempest-plugin repository (https://git.openstack.org/cgit/openstack/heat-tempest-plugin). Functional tests that are appropriate for the Tempest environment have been migrated to the new plugin. Other functional tests remain behind in the heat repository. 
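The ``restricted_actions`` feature described above is configured in an environment file. A minimal sketch, assuming a resource named ``my_server`` (the name and the exact action list are illustrative):

```yaml
resource_registry:
  resources:
    my_server:
      # Reject stack updates that would update or replace this resource
      restricted_actions:
        - update
        - replace
```

Passing this environment on a stack update causes Heat to fail the update with an error if it would touch ``my_server`` in one of the restricted ways, rather than applying the change.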
././@LongLink0000000000000000000000000000015100000000000011212 Lustar 00000000000000heat-10.0.0/releasenotes/notes/project-tags-orchestration-If9125519e35f9f95ea8343cb07c377de9ccf5edf.yamlheat-10.0.0/releasenotes/notes/project-tags-orchestration-If9125519e35f9f95ea8343cb07c377de9ccf5edf.0000666000175100017510000000025713245511546031625 0ustar zuulzuul00000000000000--- features: - Add a `tags` parameter for creating and updating keystone projects. The defined comma-delimited list will insert tags into newly created or updated projects. heat-10.0.0/releasenotes/notes/mark-unhealthy-phys-id-e90fd669d86963d1.yaml0000666000175100017510000000030713245511546025741 0ustar zuulzuul00000000000000--- features: - The ``resource mark unhealthy`` command now accepts either a logical resource name (as it did previously) or a physical resource ID to identify the resource to be marked unhealthy. heat-10.0.0/releasenotes/notes/.placeholder0000666000175100017510000000000013245511546020637 0ustar zuulzuul00000000000000heat-10.0.0/releasenotes/notes/support-rbac-for-qos-policy-a55434654e1dd953.yaml0000666000175100017510000000022113245511546026630 0ustar zuulzuul00000000000000--- features: - Support managing the RBAC policy for the 'qos_policy' resource, which allows sharing a Neutron QoS policy with subsets of tenants. heat-10.0.0/releasenotes/notes/server-group-soft-policy-8eabde24bf14bf1d.yaml0000666000175100017510000000021013245511546026667 0ustar zuulzuul00000000000000--- features: - Two new policies, soft-affinity and soft-anti-affinity, are now supported for the OS::Nova::ServerGroup resource. heat-10.0.0/releasenotes/notes/neutron-address-scope-ce234763e22c7449.yaml0000666000175100017510000000062013245511546025556 0ustar zuulzuul00000000000000--- features: - | A new ``OS::Neutron::AddressScope`` resource that helps in managing the lifecycle of a neutron address scope. Availability of this resource depends on availability of the neutron ``address-scope`` API extension. 
This resource can be associated with multiple subnet pools in a one-to-many relationship. The subnet pools under an address scope must not overlap.heat-10.0.0/releasenotes/notes/event-transport-302d1db6c5a5daa9.yaml0000666000175100017510000000031113245511546024760 0ustar zuulzuul00000000000000--- features: - Added a new ``event-sinks`` element to the environment which allows specifying a target where events from the stack are sent. It supports the ``zaqar-queue`` element for now. heat-10.0.0/releasenotes/notes/keystone-region-ce3b435c73c81ce4.yaml0000666000175100017510000000016713245511546024673 0ustar zuulzuul00000000000000--- features: - A new ``OS::Keystone::Region`` resource that helps in managing the lifecycle of a keystone region. heat-10.0.0/releasenotes/notes/hidden-designate-domain-record-res-d445ca7f1251b63d.yaml0000666000175100017510000000030713245511546030165 0ustar zuulzuul00000000000000--- deprecations: - | The Designate resource plugins ``OS::Designate::Domain`` and ``OS::Designate::Record`` are now hidden. Use ``OS::Designate::Zone`` and ``OS::Designate::RecordSet`` instead.heat-10.0.0/releasenotes/notes/map-replace-function-26bf247c620f64bf.yaml0000666000175100017510000000041613245511546025501 0ustar zuulzuul00000000000000--- features: - Add a ``map_replace`` function that takes 2 arguments: an input map and a map containing a ``keys`` and/or ``values`` map. Key/value substitutions on the input map are performed based on the mappings passed in ``keys`` and ``values``. heat-10.0.0/releasenotes/notes/bp-support-trunk-port-733019c49a429826.yaml0000666000175100017510000000013313245511546025431 0ustar zuulzuul00000000000000--- features: - New resource ``OS::Neutron::Trunk`` is added to manage Neutron Trunks. heat-10.0.0/releasenotes/notes/deprecate-threshold-alarm-5738f5ab8aebfd20.yaml0000666000175100017510000000030713245511546026650 0ustar zuulzuul00000000000000--- deprecations: - The threshold alarm, which uses the ceilometer API, has been deprecated in aodh since Ocata. 
Please use ``OS::Aodh::GnocchiAggregationByResourcesAlarm`` in place of ``OS::Aodh::Alarm``.heat-10.0.0/releasenotes/notes/neutron-segment-support-a7d44af499838a4e.yaml0000666000175100017510000000106413245511546026352 0ustar zuulzuul00000000000000--- features: - A new ``openstack`` client plugin to use the python-openstacksdk library, and a ``neutron.segment`` custom constraint. - A new ``OS::Neutron::Segment`` resource to create routed networks. Availability of this resource depends on availability of the neutron ``segment`` API extension. - Resource ``OS::Neutron::Subnet`` now supports an optional ``segment`` property to specify a segment. - Resource ``OS::Neutron::Net`` now supports an ``l2_adjacency`` attribute, indicating whether L2 connectivity is available across the network or not. heat-10.0.0/releasenotes/notes/resource_group_removal_policies_mode-d489e0cc49942e2a.yaml0000666000175100017510000000035313245511546031163 0ustar zuulzuul00000000000000--- features: - | OS::Heat::ResourceGroup now supports a removal_policies_mode property. This can be used to optionally select different behavior on update where you may wish to overwrite vs append to the current policy. heat-10.0.0/releasenotes/notes/set-tags-for-network-resource-d6f3843c546744a2.yaml0000666000175100017510000000012113245511546027156 0ustar zuulzuul00000000000000--- features: - Allows setting or updating the tags for the OS::Neutron::Net resource. heat-10.0.0/releasenotes/notes/barbican-container-77967add0832d51b.yaml0000666000175100017510000000055413245511546025142 0ustar zuulzuul00000000000000--- features: - Add a new ``OS::Barbican::GenericContainer`` resource for storing arbitrary barbican secrets. - Add a new ``OS::Barbican::RSAContainer`` resource for storing RSA public keys, private keys, and private key pass phrases. - A new ``OS::Barbican::CertificateContainer`` resource for storing the secrets that are relevant to certificates. 
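The ``removal_policies_mode`` property mentioned a few notes above can be sketched as follows (the group definition and the ``update`` mode value are assumptions based on typical ResourceGroup usage, not spelled out in the release note itself):

```yaml
resources:
  web_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      # Ask the group to remove the member at index '0' on the next update.
      removal_policies:
        - resource_list: ['0']
      # 'update' overwrites the stored removal policy on update;
      # the default behaviour appends to it instead.
      removal_policies_mode: update
      resource_def:
        type: OS::Nova::Server
        properties:
          image: my-image       # illustrative
          flavor: m1.small      # illustrative
```

Choosing overwrite vs append matters when a group is updated repeatedly: appending accumulates every index ever blacklisted, while overwriting keeps only the policy from the latest update.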
heat-10.0.0/releasenotes/source/0000775000175100017510000000000013245512113016523 5ustar zuulzuul00000000000000heat-10.0.0/releasenotes/source/newton.rst0000666000175100017510000000023213245511546020577 0ustar zuulzuul00000000000000=================================== Newton Series Release Notes =================================== .. release-notes:: :branch: origin/stable/newton heat-10.0.0/releasenotes/source/_static/0000775000175100017510000000000013245512113020151 5ustar zuulzuul00000000000000heat-10.0.0/releasenotes/source/_static/.placeholder0000666000175100017510000000000013245511546022435 0ustar zuulzuul00000000000000heat-10.0.0/releasenotes/source/liberty.rst0000666000175100017510000000022113245511546020735 0ustar zuulzuul00000000000000============================== Liberty Series Release Notes ============================== .. release-notes:: :branch: origin/stable/libertyheat-10.0.0/releasenotes/source/pike.rst0000666000175100017510000000021713245511546020220 0ustar zuulzuul00000000000000=================================== Pike Series Release Notes =================================== .. release-notes:: :branch: stable/pike heat-10.0.0/releasenotes/source/conf.py0000666000175100017510000002134013245511546020035 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # Heat Release Notes documentation build configuration file, created by # sphinx-quickstart on Tue Nov 3 17:40:50 2015. 
# # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'openstackdocstheme', 'reno.sphinxext', ] # openstackdocstheme options repository_name = 'openstack/heat' bug_project = 'heat' bug_tag = '' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Heat Release Notes' copyright = u'2015, Heat Developers' # Release notes are version independent, no need to set version and release release = '' version = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. 
exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'openstackdocs' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. 
They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. html_last_updated_fmt = '%Y-%m-%d %H:%M' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'HeatReleaseNotesdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). 
# 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'HeatReleaseNotes.tex', u'Heat Release Notes Documentation', u'Heat Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'heatreleasenotes', u'Heat Release Notes Documentation', [u'Heat Developers'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'HeatReleaseNotes', u'Heat Release Notes Documentation', u'Heat Developers', 'HeatReleaseNotes', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. 
# texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False # -- Options for Internationalization output ------------------------------ locale_dirs = ['locale/'] heat-10.0.0/releasenotes/source/unreleased.rst0000666000175100017510000000015713245511546021412 0ustar zuulzuul00000000000000============================== Current Series Release Notes ============================== .. release-notes:: heat-10.0.0/releasenotes/source/index.rst0000666000175100017510000000023713245511554020370 0ustar zuulzuul00000000000000====================== Heat Release Notes ====================== .. toctree:: :maxdepth: 1 unreleased pike ocata newton mitaka liberty heat-10.0.0/releasenotes/source/mitaka.rst0000666000175100017510000000023213245511546020525 0ustar zuulzuul00000000000000=================================== Mitaka Series Release Notes =================================== .. release-notes:: :branch: origin/stable/mitaka heat-10.0.0/releasenotes/source/ocata.rst0000666000175100017510000000023013245511546020344 0ustar zuulzuul00000000000000=================================== Ocata Series Release Notes =================================== .. release-notes:: :branch: origin/stable/ocata heat-10.0.0/releasenotes/source/_templates/0000775000175100017510000000000013245512113020660 5ustar zuulzuul00000000000000heat-10.0.0/releasenotes/source/_templates/.placeholder0000666000175100017510000000000013245511546023144 0ustar zuulzuul00000000000000heat-10.0.0/contrib/0000775000175100017510000000000013245512113014172 5ustar zuulzuul00000000000000heat-10.0.0/contrib/heat_docker/0000775000175100017510000000000013245512113016442 5ustar zuulzuul00000000000000heat-10.0.0/contrib/heat_docker/README.md0000666000175100017510000000075513245511546017737 0ustar zuulzuul00000000000000Docker plugin for OpenStack Heat ================================ This plugin enables using Docker containers as resources in a Heat template. ### 1. 
Install the Docker plugin in Heat NOTE: These instructions assume the value of heat.conf plugin_dirs includes the default directory /usr/lib/heat. To install the plugin, from this directory run: sudo python ./setup.py install ### 2. Restart heat Only the process "heat-engine" needs to be restarted to load the newly installed plugin. heat-10.0.0/contrib/heat_docker/heat_docker/0000775000175100017510000000000013245512113020712 5ustar zuulzuul00000000000000heat-10.0.0/contrib/heat_docker/heat_docker/__init__.py0000666000175100017510000000000013245511546023024 0ustar zuulzuul00000000000000heat-10.0.0/contrib/heat_docker/heat_docker/tests/0000775000175100017510000000000013245512113022054 5ustar zuulzuul00000000000000heat-10.0.0/contrib/heat_docker/heat_docker/tests/test_docker_container.py0000666000175100017510000005220713245511546027013 0ustar zuulzuul00000000000000# # Copyright (c) 2013 Docker, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import six from heat.common import exception from heat.common.i18n import _ from heat.common import template_format from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils import testtools from heat_docker.resources import docker_container from heat_docker.tests import fake_docker_client as docker docker_container.docker = docker template = ''' { "AWSTemplateFormatVersion": "2010-09-09", "Description": "Test template", "Parameters": {}, "Resources": { "Blog": { "Type": "DockerInc::Docker::Container", "Properties": { "image": "samalba/wordpress", "env": [ "FOO=bar" ] } } } } ''' class DockerContainerTest(common.HeatTestCase): def setUp(self): super(DockerContainerTest, self).setUp() for res_name, res_class in docker_container.resource_mapping().items(): resource._register_class(res_name, res_class) self.addCleanup(self.m.VerifyAll) def create_container(self, resource_name): t = template_format.parse(template) self.stack = utils.parse_stack(t) resource = docker_container.DockerContainer( resource_name, self.stack.t.resource_definitions(self.stack)[resource_name], self.stack) self.m.StubOutWithMock(resource, 'get_client') resource.get_client().MultipleTimes().AndReturn( docker.Client()) self.assertIsNone(resource.validate()) self.m.ReplayAll() scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) return resource def get_container_state(self, resource): client = resource.get_client() return client.inspect_container(resource.resource_id)['State'] def test_resource_create(self): container = self.create_container('Blog') self.assertTrue(container.resource_id) running = self.get_container_state(container)['Running'] self.assertIs(True, running) client = container.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertIsNone(client.container_create[0]['name']) def 
test_create_with_name(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['name'] = 'super-blog' resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) self.m.StubOutWithMock(resource, 'get_client') resource.get_client().MultipleTimes().AndReturn( docker.Client()) self.assertIsNone(resource.validate()) self.m.ReplayAll() scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual('super-blog', client.container_create[0]['name']) @mock.patch.object(docker_container.DockerContainer, 'get_client') def test_create_failed(self, test_client): mock_client = mock.Mock() mock_client.inspect_container.return_value = { "State": { "ExitCode": -1 } } mock_client.logs.return_value = "Container startup failed" test_client.return_value = mock_client mock_stack = mock.Mock() mock_stack.has_cache_data.return_value = False mock_stack.db_resource_get.return_value = None res_def = mock.Mock(spec=rsrc_defn.ResourceDefinition) docker_res = docker_container.DockerContainer("test", res_def, mock_stack) exc = self.assertRaises(exception.ResourceInError, docker_res.check_create_complete, 'foo') self.assertIn("Container startup failed", six.text_type(exc)) def test_start_with_bindings_and_links(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['port_bindings'] = {'80/tcp': [{'HostPort': '80'}]} props['links'] = {'db': 'mysql'} resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) self.m.StubOutWithMock(resource, 'get_client') 
resource.get_client().MultipleTimes().AndReturn( docker.Client()) self.assertIsNone(resource.validate()) self.m.ReplayAll() scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual({'db': 'mysql'}, client.container_start[0]['links']) self.assertEqual( {'80/tcp': [{'HostPort': '80'}]}, client.container_start[0]['port_bindings']) def test_resource_attributes(self): container = self.create_container('Blog') # Test network info attributes self.assertEqual('172.17.42.1', container.FnGetAtt('network_gateway')) self.assertEqual('172.17.0.3', container.FnGetAtt('network_ip')) self.assertEqual('1080', container.FnGetAtt('network_tcp_ports')) self.assertEqual('', container.FnGetAtt('network_udp_ports')) # Test logs attributes self.assertEqual('---logs_begin---', container.FnGetAtt('logs_head')) self.assertEqual('---logs_end---', container.FnGetAtt('logs_tail')) # Test a non existing attribute self.assertRaises(exception.InvalidTemplateAttribute, container.FnGetAtt, 'invalid_attribute') @testtools.skipIf(docker is None, 'docker-py not available') def test_resource_delete(self): container = self.create_container('Blog') scheduler.TaskRunner(container.delete)() self.assertEqual((container.DELETE, container.COMPLETE), container.state) exists = True try: self.get_container_state(container)['Running'] except docker.errors.APIError as error: if error.response.status_code == 404: exists = False else: raise self.assertIs(False, exists) self.m.VerifyAll() @testtools.skipIf(docker is None, 'docker-py not available') def test_resource_delete_exception(self): response = mock.MagicMock() response.status_code = 404 response.content = 'some content' container = self.create_container('Blog') self.m.StubOutWithMock(container.get_client(), 'kill') container.get_client().kill(container.resource_id).AndRaise( 
docker.errors.APIError('Not found', response)) self.m.StubOutWithMock(container, '_get_container_status') container._get_container_status(container.resource_id).AndRaise( docker.errors.APIError('Not found', response)) self.m.ReplayAll() scheduler.TaskRunner(container.delete)() self.m.VerifyAll() def test_resource_suspend_resume(self): container = self.create_container('Blog') # Test suspend scheduler.TaskRunner(container.suspend)() self.assertEqual((container.SUSPEND, container.COMPLETE), container.state) running = self.get_container_state(container)['Running'] self.assertIs(False, running) # Test resume scheduler.TaskRunner(container.resume)() self.assertEqual((container.RESUME, container.COMPLETE), container.state) running = self.get_container_state(container)['Running'] self.assertIs(True, running) def test_start_with_restart_policy_no(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['restart_policy'] = {'Name': 'no', 'MaximumRetryCount': 0} resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(resource.validate()) scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual({'Name': 'no', 'MaximumRetryCount': 0}, client.container_start[0]['restart_policy']) def test_start_with_restart_policy_on_failure(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['restart_policy'] = {'Name': 'on-failure', 'MaximumRetryCount': 10} resource = 
docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(resource.validate()) scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual({'Name': 'on-failure', 'MaximumRetryCount': 10}, client.container_start[0]['restart_policy']) def test_start_with_restart_policy_always(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['restart_policy'] = {'Name': 'always', 'MaximumRetryCount': 0} resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(resource.validate()) scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual({'Name': 'always', 'MaximumRetryCount': 0}, client.container_start[0]['restart_policy']) def test_start_with_caps(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['cap_add'] = ['NET_ADMIN'] props['cap_drop'] = ['MKNOD'] resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(resource.validate()) scheduler.TaskRunner(resource.create)() 
self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual(['NET_ADMIN'], client.container_start[0]['cap_add']) self.assertEqual(['MKNOD'], client.container_start[0]['cap_drop']) def test_start_with_read_only(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['read_only'] = True resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(resource, 'get_client') get_client_mock.return_value = docker.Client() get_client_mock.return_value.set_api_version('1.17') self.assertIsNone(resource.validate()) scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) client = resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertIs(True, client.container_start[0]['read_only']) def arg_for_low_api_version(self, arg, value, low_version): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props[arg] = value my_resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(my_resource, 'get_client') get_client_mock.return_value = docker.Client() get_client_mock.return_value.set_api_version(low_version) msg = self.assertRaises(docker_container.InvalidArgForVersion, my_resource.validate) min_version = docker_container.MIN_API_VERSION_MAP[arg] args = dict(arg=arg, min_version=min_version) expected = _('"%(arg)s" is not supported for API version ' '< "%(min_version)s"') % args self.assertEqual(expected, six.text_type(msg)) def 
test_start_with_read_only_for_low_api_version(self): self.arg_for_low_api_version('read_only', True, '1.16') def test_compare_version(self): self.assertEqual(docker_container.compare_version('1.17', '1.17'), 0) self.assertEqual(docker_container.compare_version('1.17', '1.16'), -1) self.assertEqual(docker_container.compare_version('1.17', '1.18'), 1) def test_create_with_cpu_shares(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['cpu_shares'] = 512 my_resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(my_resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(my_resource.validate()) scheduler.TaskRunner(my_resource.create)() self.assertEqual((my_resource.CREATE, my_resource.COMPLETE), my_resource.state) client = my_resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual(512, client.container_create[0]['cpu_shares']) def test_create_with_cpu_shares_for_low_api_version(self): self.arg_for_low_api_version('cpu_shares', 512, '1.7') def test_start_with_mapping_devices(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['devices'] = ( [{'path_on_host': '/dev/sda', 'path_in_container': '/dev/xvdc', 'permissions': 'r'}, {'path_on_host': '/dev/mapper/a_bc-d', 'path_in_container': '/dev/xvdd', 'permissions': 'rw'}]) my_resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(my_resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(my_resource.validate()) scheduler.TaskRunner(my_resource.create)() 
self.assertEqual((my_resource.CREATE, my_resource.COMPLETE), my_resource.state) client = my_resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual(['/dev/sda:/dev/xvdc:r', '/dev/mapper/a_bc-d:/dev/xvdd:rw'], client.container_start[0]['devices']) def test_start_with_mapping_devices_also_with_privileged(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['devices'] = ( [{'path_on_host': '/dev/sdb', 'path_in_container': '/dev/xvdc', 'permissions': 'r'}]) props['privileged'] = True my_resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(my_resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(my_resource.validate()) scheduler.TaskRunner(my_resource.create)() self.assertEqual((my_resource.CREATE, my_resource.COMPLETE), my_resource.state) client = my_resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertNotIn('devices', client.container_start[0]) def test_start_with_mapping_devices_for_low_api_version(self): value = ([{'path_on_host': '/dev/sda', 'path_in_container': '/dev/xvdc', 'permissions': 'rwm'}]) self.arg_for_low_api_version('devices', value, '1.13') def test_start_with_mapping_devices_not_set_path_in_container(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['devices'] = [{'path_on_host': '/dev/sda', 'permissions': 'rwm'}] my_resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(my_resource, 'get_client') get_client_mock.return_value = docker.Client() 
self.assertIsNone(my_resource.validate()) scheduler.TaskRunner(my_resource.create)() self.assertEqual((my_resource.CREATE, my_resource.COMPLETE), my_resource.state) client = my_resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual(['/dev/sda:/dev/sda:rwm'], client.container_start[0]['devices']) def test_create_with_cpu_set(self): t = template_format.parse(template) self.stack = utils.parse_stack(t) definition = self.stack.t.resource_definitions(self.stack)['Blog'] props = t['Resources']['Blog']['Properties'].copy() props['cpu_set'] = '0-8,16-24,28' my_resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) get_client_mock = self.patchobject(my_resource, 'get_client') get_client_mock.return_value = docker.Client() self.assertIsNone(my_resource.validate()) scheduler.TaskRunner(my_resource.create)() self.assertEqual((my_resource.CREATE, my_resource.COMPLETE), my_resource.state) client = my_resource.get_client() self.assertEqual(['samalba/wordpress'], client.pulled_images) self.assertEqual('0-8,16-24,28', client.container_create[0]['cpuset']) def test_create_with_cpu_set_for_low_api_version(self): self.arg_for_low_api_version('cpu_set', '0-8,^2', '1.11') heat-10.0.0/contrib/heat_docker/heat_docker/tests/__init__.py0000666000175100017510000000000013245511546024166 0ustar zuulzuul00000000000000heat-10.0.0/contrib/heat_docker/heat_docker/tests/fake_docker_client.py0000666000175100017510000000734413245511546026244 0ustar zuulzuul00000000000000# # Copyright (c) 2013 Docker, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import random
import string


class APIError(Exception):
    def __init__(self, content, response):
        super(APIError, self).__init__(content)
        self.response = response


errors = mock.MagicMock()
errors.APIError = APIError


class FakeResponse(object):
    def __init__(self, status_code=200, reason='OK'):
        self.status_code = status_code
        self.reason = reason
        self.content = reason


class Client(object):
    def __init__(self, endpoint=None):
        self._endpoint = endpoint
        self._containers = {}
        self.pulled_images = []
        self.container_create = []
        self.container_start = []
        self.version_info = {}

    def _generate_string(self, n=32):
        return ''.join(random.choice(string.ascii_lowercase)
                       for i in range(n))

    def _check_exists(self, container_id):
        if container_id not in self._containers:
            raise APIError(
                '404 Client Error: Not Found ("No such container: '
                '{0}")'.format(container_id),
                FakeResponse(status_code=404, reason='No such container'))

    def _set_running(self, container_id, running):
        self._check_exists(container_id)
        self._containers[container_id] = running

    def inspect_container(self, container_id):
        self._check_exists(container_id)
        info = {
            'Id': container_id,
            'NetworkSettings': {
                'Bridge': 'docker0',
                'Gateway': '172.17.42.1',
                'IPAddress': '172.17.0.3',
                'IPPrefixLen': 16,
                'Ports': {
                    '80/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '1080'}]
                }
            },
            'State': {
                'Running': self._containers[container_id]
            }
        }
        return info

    def logs(self, container_id):
        logs = ['---logs_begin---']
        for i in range(random.randint(1, 20)):
            logs.append(self._generate_string(random.randint(5, 42)))
        logs.append('---logs_end---')
        return '\n'.join(logs)

    def create_container(self, **kwargs):
        self.container_create.append(kwargs)
        container_id = self._generate_string()
        self._containers[container_id] = None
        self._set_running(container_id, False)
        return self.inspect_container(container_id)

    def remove_container(self, container_id, **kwargs):
        self._check_exists(container_id)
        del self._containers[container_id]

    def start(self, container_id, **kwargs):
        self.container_start.append(kwargs)
        self._set_running(container_id, True)

    def stop(self, container_id):
        self._set_running(container_id, False)

    def kill(self, container_id):
        self._set_running(container_id, False)

    def pull(self, image):
        self.pulled_images.append(image)

    def version(self, api_version=True):
        if not self.version_info:
            self.version_info['ApiVersion'] = '1.15'
        return self.version_info

    def set_api_version(self, version):
        self.version_info['ApiVersion'] = version

heat-10.0.0/contrib/heat_docker/heat_docker/resources/0000775000175100017510000000000013245512113022724 5ustar zuulzuul00000000000000
heat-10.0.0/contrib/heat_docker/heat_docker/resources/__init__.py0000666000175100017510000000000013245511546025036 0ustar zuulzuul00000000000000
heat-10.0.0/contrib/heat_docker/heat_docker/resources/docker_container.py0000666000175100017510000005124613245511554026621 0ustar zuulzuul00000000000000#
# Copyright (c) 2013 Docker, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import distutils.version

from oslo_log import log as logging
import six

from heat.common import exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support

LOG = logging.getLogger(__name__)

DOCKER_INSTALLED = False
MIN_API_VERSION_MAP = {'read_only': '1.17', 'cpu_shares': '1.8',
                       'devices': '1.14', 'cpu_set': '1.12'}
DEVICE_PATH_REGEX = r"^/dev/[/_\-a-zA-Z0-9]+$"

# conditionally import so tests can work without having the dependency
# satisfied
try:
    import docker
    DOCKER_INSTALLED = True
except ImportError:
    docker = None


class DockerContainer(resource.Resource):

    support_status = support.SupportStatus(
        status=support.UNSUPPORTED,
        message=_('This resource is not supported, use at your own risk.'))

    PROPERTIES = (
        DOCKER_ENDPOINT, HOSTNAME, USER, MEMORY, PORT_SPECS,
        PRIVILEGED, TTY, OPEN_STDIN, STDIN_ONCE, ENV, CMD, DNS, IMAGE,
        VOLUMES, VOLUMES_FROM, PORT_BINDINGS, LINKS, NAME,
        RESTART_POLICY, CAP_ADD, CAP_DROP, READ_ONLY, CPU_SHARES,
        DEVICES, CPU_SET
    ) = (
        'docker_endpoint', 'hostname', 'user', 'memory', 'port_specs',
        'privileged', 'tty', 'open_stdin', 'stdin_once', 'env', 'cmd', 'dns',
        'image', 'volumes', 'volumes_from', 'port_bindings', 'links', 'name',
        'restart_policy', 'cap_add', 'cap_drop', 'read_only', 'cpu_shares',
        'devices', 'cpu_set'
    )

    ATTRIBUTES = (
        INFO, NETWORK_INFO, NETWORK_IP, NETWORK_GATEWAY,
        NETWORK_TCP_PORTS, NETWORK_UDP_PORTS, LOGS, LOGS_HEAD,
        LOGS_TAIL,
    ) = (
        'info', 'network_info', 'network_ip', 'network_gateway',
        'network_tcp_ports', 'network_udp_ports', 'logs', 'logs_head',
        'logs_tail',
    )

    _RESTART_POLICY_KEYS = (
        POLICY_NAME, POLICY_MAXIMUM_RETRY_COUNT,
    ) = (
        'Name', 'MaximumRetryCount',
    )

    _DEVICES_KEYS = (
        PATH_ON_HOST, PATH_IN_CONTAINER, PERMISSIONS
    ) = (
        'path_on_host', 'path_in_container', 'permissions'
    )

    _CAPABILITIES = ['SETPCAP', 'SYS_MODULE', 'SYS_RAWIO', 'SYS_PACCT',
                     'SYS_ADMIN', 'SYS_NICE',
'SYS_RESOURCE', 'SYS_TIME', 'SYS_TTY_CONFIG', 'MKNOD', 'AUDIT_WRITE', 'AUDIT_CONTROL', 'MAC_OVERRIDE', 'MAC_ADMIN', 'NET_ADMIN', 'SYSLOG', 'CHOWN', 'NET_RAW', 'DAC_OVERRIDE', 'FOWNER', 'DAC_READ_SEARCH', 'FSETID', 'KILL', 'SETGID', 'SETUID', 'LINUX_IMMUTABLE', 'NET_BIND_SERVICE', 'NET_BROADCAST', 'IPC_LOCK', 'IPC_OWNER', 'SYS_CHROOT', 'SYS_PTRACE', 'SYS_BOOT', 'LEASE', 'SETFCAP', 'WAKE_ALARM', 'BLOCK_SUSPEND', 'ALL'] properties_schema = { DOCKER_ENDPOINT: properties.Schema( properties.Schema.STRING, _('Docker daemon endpoint (by default the local docker daemon ' 'will be used).'), default=None ), HOSTNAME: properties.Schema( properties.Schema.STRING, _('Hostname of the container.'), default='' ), USER: properties.Schema( properties.Schema.STRING, _('Username or UID.'), default='' ), MEMORY: properties.Schema( properties.Schema.INTEGER, _('Memory limit (Bytes).') ), PORT_SPECS: properties.Schema( properties.Schema.LIST, _('TCP/UDP ports mapping.'), default=None ), PORT_BINDINGS: properties.Schema( properties.Schema.MAP, _('TCP/UDP ports bindings.'), ), LINKS: properties.Schema( properties.Schema.MAP, _('Links to other containers.'), ), NAME: properties.Schema( properties.Schema.STRING, _('Name of the container.'), ), PRIVILEGED: properties.Schema( properties.Schema.BOOLEAN, _('Enable extended privileges.'), default=False ), TTY: properties.Schema( properties.Schema.BOOLEAN, _('Allocate a pseudo-tty.'), default=False ), OPEN_STDIN: properties.Schema( properties.Schema.BOOLEAN, _('Open stdin.'), default=False ), STDIN_ONCE: properties.Schema( properties.Schema.BOOLEAN, _('If true, close stdin after the 1 attached client disconnects.'), default=False ), ENV: properties.Schema( properties.Schema.LIST, _('Set environment variables.'), ), CMD: properties.Schema( properties.Schema.LIST, _('Command to run after spawning the container.'), default=[] ), DNS: properties.Schema( properties.Schema.LIST, _('Set custom dns servers.'), ), IMAGE: properties.Schema( 
properties.Schema.STRING, _('Image name.') ), VOLUMES: properties.Schema( properties.Schema.MAP, _('Create a bind mount.'), default={} ), VOLUMES_FROM: properties.Schema( properties.Schema.LIST, _('Mount all specified volumes.'), default='' ), RESTART_POLICY: properties.Schema( properties.Schema.MAP, _('Restart policies (only supported for API version >= 1.2.0).'), schema={ POLICY_NAME: properties.Schema( properties.Schema.STRING, _('The behavior to apply when the container exits.'), default='no', constraints=[ constraints.AllowedValues(['no', 'on-failure', 'always']), ] ), POLICY_MAXIMUM_RETRY_COUNT: properties.Schema( properties.Schema.INTEGER, _('A maximum restart count for the ' 'on-failure policy.'), default=0 ) }, default={}, support_status=support.SupportStatus(version='2015.1') ), CAP_ADD: properties.Schema( properties.Schema.LIST, _('Be used to add kernel capabilities (only supported for ' 'API version >= 1.2.0).'), schema=properties.Schema( properties.Schema.STRING, _('The security features provided by Linux kernels.'), constraints=[ constraints.AllowedValues(_CAPABILITIES), ] ), default=[], support_status=support.SupportStatus(version='2015.1') ), CAP_DROP: properties.Schema( properties.Schema.LIST, _('Be used to drop kernel capabilities (only supported for ' 'API version >= 1.2.0).'), schema=properties.Schema( properties.Schema.STRING, _('The security features provided by Linux kernels.'), constraints=[ constraints.AllowedValues(_CAPABILITIES), ] ), default=[], support_status=support.SupportStatus(version='2015.1') ), READ_ONLY: properties.Schema( properties.Schema.BOOLEAN, _('If true, mount the container\'s root filesystem ' 'as read only (only supported for API version >= %s).') % MIN_API_VERSION_MAP['read_only'], default=False, support_status=support.SupportStatus(version='2015.1'), ), CPU_SHARES: properties.Schema( properties.Schema.INTEGER, _('Relative weight which determines the allocation of the CPU ' 'processing power(only supported for API 
version >= %s).') % MIN_API_VERSION_MAP['cpu_shares'], default=0, support_status=support.SupportStatus(version='5.0.0'), ), DEVICES: properties.Schema( properties.Schema.LIST, _('Device mappings (only supported for API version >= %s).') % MIN_API_VERSION_MAP['devices'], schema=properties.Schema( properties.Schema.MAP, schema={ PATH_ON_HOST: properties.Schema( properties.Schema.STRING, _('The device path on the host.'), constraints=[ constraints.Length(max=255), constraints.AllowedPattern(DEVICE_PATH_REGEX), ], required=True ), PATH_IN_CONTAINER: properties.Schema( properties.Schema.STRING, _('The device path of the container' ' mappings to the host.'), constraints=[ constraints.Length(max=255), constraints.AllowedPattern(DEVICE_PATH_REGEX), ], ), PERMISSIONS: properties.Schema( properties.Schema.STRING, _('The permissions of the container to' ' read/write/create the devices.'), constraints=[ constraints.AllowedValues(['r', 'w', 'm', 'rw', 'rm', 'wm', 'rwm']), ], default='rwm' ) } ), default=[], support_status=support.SupportStatus(version='5.0.0'), ), CPU_SET: properties.Schema( properties.Schema.STRING, _('The CPUs in which to allow execution ' '(only supported for API version >= %s).') % MIN_API_VERSION_MAP['cpu_set'], support_status=support.SupportStatus(version='5.0.0'), ) } attributes_schema = { INFO: attributes.Schema( _('Container info.') ), NETWORK_INFO: attributes.Schema( _('Container network info.') ), NETWORK_IP: attributes.Schema( _('Container ip address.') ), NETWORK_GATEWAY: attributes.Schema( _('Container ip gateway.') ), NETWORK_TCP_PORTS: attributes.Schema( _('Container TCP ports.') ), NETWORK_UDP_PORTS: attributes.Schema( _('Container UDP ports.') ), LOGS: attributes.Schema( _('Container logs.') ), LOGS_HEAD: attributes.Schema( _('Container first logs line.') ), LOGS_TAIL: attributes.Schema( _('Container last logs line.') ), } def get_client(self): client = None if DOCKER_INSTALLED: endpoint = self.properties.get(self.DOCKER_ENDPOINT) if endpoint: 
client = docker.Client(endpoint) else: client = docker.Client() return client def _parse_networkinfo_ports(self, networkinfo): tcp = [] udp = [] for port, info in six.iteritems(networkinfo['Ports']): p = port.split('/') if not info or len(p) != 2 or 'HostPort' not in info[0]: continue port = info[0]['HostPort'] if p[1] == 'tcp': tcp.append(port) elif p[1] == 'udp': udp.append(port) return (','.join(tcp), ','.join(udp)) def _container_networkinfo(self, client, resource_id): info = client.inspect_container(self.resource_id) networkinfo = info['NetworkSettings'] ports = self._parse_networkinfo_ports(networkinfo) networkinfo['TcpPorts'] = ports[0] networkinfo['UdpPorts'] = ports[1] return networkinfo def _resolve_attribute(self, name): if not self.resource_id: return if name == 'info': client = self.get_client() return client.inspect_container(self.resource_id) if name == 'network_info': client = self.get_client() networkinfo = self._container_networkinfo(client, self.resource_id) return networkinfo if name == 'network_ip': client = self.get_client() networkinfo = self._container_networkinfo(client, self.resource_id) return networkinfo['IPAddress'] if name == 'network_gateway': client = self.get_client() networkinfo = self._container_networkinfo(client, self.resource_id) return networkinfo['Gateway'] if name == 'network_tcp_ports': client = self.get_client() networkinfo = self._container_networkinfo(client, self.resource_id) return networkinfo['TcpPorts'] if name == 'network_udp_ports': client = self.get_client() networkinfo = self._container_networkinfo(client, self.resource_id) return networkinfo['UdpPorts'] if name == 'logs': client = self.get_client() logs = client.logs(self.resource_id) return logs if name == 'logs_head': client = self.get_client() logs = client.logs(self.resource_id) return logs.split('\n')[0] if name == 'logs_tail': client = self.get_client() logs = client.logs(self.resource_id) return logs.split('\n').pop() def handle_create(self): create_args 
= { 'image': self.properties[self.IMAGE], 'command': self.properties[self.CMD], 'hostname': self.properties[self.HOSTNAME], 'user': self.properties[self.USER], 'stdin_open': self.properties[self.OPEN_STDIN], 'tty': self.properties[self.TTY], 'mem_limit': self.properties[self.MEMORY], 'ports': self.properties[self.PORT_SPECS], 'environment': self.properties[self.ENV], 'dns': self.properties[self.DNS], 'volumes': self.properties[self.VOLUMES], 'name': self.properties[self.NAME], 'cpu_shares': self.properties[self.CPU_SHARES], 'cpuset': self.properties[self.CPU_SET] } client = self.get_client() client.pull(self.properties[self.IMAGE]) result = client.create_container(**create_args) container_id = result['Id'] self.resource_id_set(container_id) start_args = {} if self.properties[self.PRIVILEGED]: start_args[self.PRIVILEGED] = True if self.properties[self.VOLUMES]: start_args['binds'] = self.properties[self.VOLUMES] if self.properties[self.VOLUMES_FROM]: start_args['volumes_from'] = self.properties[self.VOLUMES_FROM] if self.properties[self.PORT_BINDINGS]: start_args['port_bindings'] = self.properties[self.PORT_BINDINGS] if self.properties[self.LINKS]: start_args['links'] = self.properties[self.LINKS] if self.properties[self.RESTART_POLICY]: start_args['restart_policy'] = self.properties[self.RESTART_POLICY] if self.properties[self.CAP_ADD]: start_args['cap_add'] = self.properties[self.CAP_ADD] if self.properties[self.CAP_DROP]: start_args['cap_drop'] = self.properties[self.CAP_DROP] if self.properties[self.READ_ONLY]: start_args[self.READ_ONLY] = True if (self.properties[self.DEVICES] and not self.properties[self.PRIVILEGED]): start_args['devices'] = self._get_mapping_devices( self.properties[self.DEVICES]) client.start(container_id, **start_args) return container_id def _get_mapping_devices(self, devices): actual_devices = [] for device in devices: if device[self.PATH_IN_CONTAINER]: actual_devices.append(':'.join( [device[self.PATH_ON_HOST], 
device[self.PATH_IN_CONTAINER], device[self.PERMISSIONS]])) else: actual_devices.append(':'.join( [device[self.PATH_ON_HOST], device[self.PATH_ON_HOST], device[self.PERMISSIONS]])) return actual_devices def _get_container_status(self, container_id): client = self.get_client() info = client.inspect_container(container_id) return info['State'] def check_create_complete(self, container_id): status = self._get_container_status(container_id) exit_status = status.get('ExitCode') if exit_status is not None and exit_status != 0: logs = self.get_client().logs(self.resource_id) raise exception.ResourceInError(resource_status=self.FAILED, status_reason=logs) return status['Running'] def handle_delete(self): if self.resource_id is None: return client = self.get_client() try: client.kill(self.resource_id) except docker.errors.APIError as ex: if ex.response.status_code != 404: raise return self.resource_id def check_delete_complete(self, container_id): if container_id is None: return True try: status = self._get_container_status(container_id) if not status['Running']: client = self.get_client() client.remove_container(container_id) except docker.errors.APIError as ex: if ex.response.status_code == 404: return True raise return False def handle_suspend(self): if not self.resource_id: return client = self.get_client() client.stop(self.resource_id) return self.resource_id def check_suspend_complete(self, container_id): status = self._get_container_status(container_id) return (not status['Running']) def handle_resume(self): if not self.resource_id: return client = self.get_client() client.start(self.resource_id) return self.resource_id def check_resume_complete(self, container_id): status = self._get_container_status(container_id) return status['Running'] def validate(self): super(DockerContainer, self).validate() self._validate_arg_for_api_version() def _validate_arg_for_api_version(self): version = None for key in MIN_API_VERSION_MAP: if self.properties[key]: if not version: 
                    client = self.get_client()
                    version = client.version()['ApiVersion']
                min_version = MIN_API_VERSION_MAP[key]
                if compare_version(min_version, version) < 0:
                    raise InvalidArgForVersion(arg=key,
                                               min_version=min_version)


def resource_mapping():
    return {
        'DockerInc::Docker::Container': DockerContainer,
    }


def available_resource_mapping():
    if DOCKER_INSTALLED:
        return resource_mapping()
    else:
        LOG.warning("Docker plug-in loaded, but docker lib "
                    "not installed.")
        return {}


def compare_version(v1, v2):
    # Note the inverted convention relative to cmp(): this returns -1 when
    # v1 > v2 and 1 when v1 < v2, which is how
    # _validate_arg_for_api_version() and the unit tests consume it.
    s1 = distutils.version.StrictVersion(v1)
    s2 = distutils.version.StrictVersion(v2)
    if s1 == s2:
        return 0
    elif s1 > s2:
        return -1
    else:
        return 1


class InvalidArgForVersion(exception.HeatException):
    msg_fmt = _('"%(arg)s" is not supported for API version '
                '< "%(min_version)s"')

heat-10.0.0/contrib/heat_docker/setup.cfg0000666000175100017510000000135313245511546020300 0ustar zuulzuul00000000000000[metadata]
name = heat-contrib-docker
summary = Heat resource for Docker containers
description-file = README.md
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://docs.openstack.org/developer/heat/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7

[files]
# Copy to /usr/lib/heat for plugin loading
data_files =
    lib/heat/docker = heat_docker/resources/*

[global]
setup-hooks =
    pbr.hooks.setup_hook

heat-10.0.0/contrib/heat_docker/requirements.txt0000666000175100017510000000002113245511554021731 0ustar zuulzuul00000000000000docker-py>=0.2.2

heat-10.0.0/contrib/heat_docker/setup.py0000666000175100017510000000202513245511546020166 0ustar zuulzuul00000000000000#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT

import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

heat-10.0.0/contrib/rackspace/0000775000175100017510000000000013245512113016126 5ustar zuulzuul00000000000000
heat-10.0.0/contrib/rackspace/heat_keystoneclient_v2/0000775000175100017510000000000013245512113022576 5ustar zuulzuul00000000000000
heat-10.0.0/contrib/rackspace/heat_keystoneclient_v2/client.py0000666000175100017510000002451513245511554024443 0ustar zuulzuul00000000000000#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Client Library for Keystone Resources.""" import weakref from keystoneclient.v2_0 import client as kc from oslo_config import cfg from oslo_log import log as logging from oslo_utils import importutils from heat.common import exception LOG = logging.getLogger('heat.common.keystoneclient') LOG.info("Keystone V2 loaded") class KeystoneClientV2(object): """Wrap keystone client so we can encapsulate logic used in resources. Note: This is intended to be initialized from a resource on a per-session basis, so the session context is passed in on initialization Also note that a copy of this is created every resource as self.keystone() via the code in engine/client.py, so there should not be any need to directly instantiate instances of this class inside resources themselves. """ def __init__(self, context): # If a trust_id is specified in the context, we immediately # authenticate so we can populate the context with a trust token # otherwise, we delay client authentication until needed to avoid # unnecessary calls to keystone. # # Note that when you obtain a token using a trust, it cannot be # used to reauthenticate and get another token, so we have to # get a new trust-token even if context.auth_token is set. 
# # - context.auth_url is expected to contain the v2.0 keystone endpoint self._context = weakref.ref(context) self._client = None if self.context.trust_id: # Create a connection to the v2 API, with the trust_id, this # populates self.context.auth_token with a trust-scoped token self._client = self._v2_client_init() @property def context(self): ctxt = self._context() assert ctxt is not None, "Need a reference to the context" return ctxt @property def client(self): if not self._client: self._client = self._v2_client_init() return self._client def _v2_client_init(self): kwargs = { 'auth_url': self.context.auth_url, 'endpoint': self.context.auth_url, 'region_name': cfg.CONF.region_name_for_services } if self.context.region_name is not None: kwargs['region_name'] = self.context.region_name auth_kwargs = {} # Note try trust_id first, as we can't reuse auth_token in that case if self.context.trust_id is not None: # We got a trust_id, so we use the admin credentials # to authenticate, then re-scope the token to the # trust impersonating the trustor user. 
# Note that this currently requires the trustor tenant_id # to be passed to the authenticate(), unlike the v3 call kwargs.update(self._service_admin_creds()) auth_kwargs['trust_id'] = self.context.trust_id auth_kwargs['tenant_id'] = self.context.tenant_id elif self.context.auth_token is not None: kwargs['tenant_name'] = self.context.project_name kwargs['token'] = self.context.auth_token elif self.context.password is not None: kwargs['username'] = self.context.username kwargs['password'] = self.context.password kwargs['tenant_name'] = self.context.project_name kwargs['tenant_id'] = self.context.tenant_id else: LOG.error("Keystone v2 API connection failed, no password " "or auth_token!") raise exception.AuthorizationFailure() kwargs['cacert'] = self._get_client_option('ca_file') kwargs['insecure'] = self._get_client_option('insecure') kwargs['cert'] = self._get_client_option('cert_file') kwargs['key'] = self._get_client_option('key_file') client = kc.Client(**kwargs) client.authenticate(**auth_kwargs) # If we are authenticating with a trust auth_kwargs are set, so set # the context auth_token with the re-scoped trust token if auth_kwargs: # Sanity check if not client.auth_ref.trust_scoped: LOG.error("v2 trust token re-scoping failed!") raise exception.AuthorizationFailure() # All OK so update the context with the token self.context.auth_token = client.auth_ref.auth_token self.context.auth_url = kwargs.get('auth_url') # Ensure the v2 API we're using is not impacted by keystone # bug #1239303, otherwise we can't trust the user_id if self.context.trustor_user_id != client.auth_ref.user_id: LOG.error("Trust impersonation failed, bug #1239303 " "suspected, you may need a newer keystone") raise exception.AuthorizationFailure() return client @staticmethod def _service_admin_creds(): # Import auth_token to have keystone_authtoken settings setup. 
importutils.import_module('keystonemiddleware.auth_token') creds = { 'username': cfg.CONF.keystone_authtoken.admin_user, 'password': cfg.CONF.keystone_authtoken.admin_password, 'auth_url': cfg.CONF.keystone_authtoken.auth_uri, 'tenant_name': cfg.CONF.keystone_authtoken.admin_tenant_name, } return creds def _get_client_option(self, option): # look for the option in the [clients_keystone] section # unknown options raise cfg.NoSuchOptError cfg.CONF.import_opt(option, 'heat.common.config', group='clients_keystone') v = getattr(cfg.CONF.clients_keystone, option) if v is not None: return v # look for the option in the generic [clients] section cfg.CONF.import_opt(option, 'heat.common.config', group='clients') return getattr(cfg.CONF.clients, option) def create_stack_user(self, username, password=''): """Create a user. User can be defined as part of a stack, either via template or created internally by a resource. This user will be added to the heat_stack_user_role as defined in the config Returns the keystone ID of the resulting user """ if len(username) > 64: LOG.warning("Truncating the username %s to the last 64 " "characters.", username) # get the last 64 characters of the username username = username[-64:] user = self.client.users.create(username, password, '%s@openstack.org' % username, tenant_id=self.context.tenant_id, enabled=True) # We add the new user to a special keystone role # This role is designed to allow easier differentiation of the # heat-generated "stack users" which will generally have credentials # deployed on an instance (hence are implicitly untrusted) roles = self.client.roles.list() stack_user_role = [r.id for r in roles if r.name == cfg.CONF.heat_stack_user_role] if len(stack_user_role) == 1: role_id = stack_user_role[0] LOG.debug("Adding user %(user)s to role %(role)s" % {'user': user.id, 'role': role_id}) self.client.roles.add_user_role(user.id, role_id, self.context.tenant_id) else: LOG.error("Failed to add user %(user)s to role %(role)s, " 
"check role exists!", {'user': username, 'role': cfg.CONF.heat_stack_user_role}) return user.id def delete_stack_user(self, user_id): self.client.users.delete(user_id) def delete_ec2_keypair(self, user_id, accesskey): self.client.ec2.delete(user_id, accesskey) def get_ec2_keypair(self, access, user_id=None): uid = user_id or self.client.auth_ref.user_id return self.client.ec2.get(uid, access) def create_ec2_keypair(self, user_id=None): uid = user_id or self.client.auth_ref.user_id return self.client.ec2.create(uid, self.context.tenant_id) def disable_stack_user(self, user_id): self.client.users.update_enabled(user_id, False) def enable_stack_user(self, user_id): self.client.users.update_enabled(user_id, True) def url_for(self, **kwargs): return self.client.service_catalog.url_for(**kwargs) @property def auth_token(self): return self.client.auth_token # ##################### # # V3 Compatible Methods # # ##################### # def create_stack_domain_user(self, username, project_id, password=None): return self.create_stack_user(username, password) def delete_stack_domain_user(self, user_id, project_id): return self.delete_stack_user(user_id) def create_stack_domain_project(self, project_id): """Use the tenant ID as domain project.""" return self.context.tenant_id def delete_stack_domain_project(self, project_id): """Pass through method since no project was created.""" pass def create_stack_domain_user_keypair(self, user_id, project_id): return self.create_ec2_keypair(user_id) def delete_stack_domain_user_keypair(self, user_id, project_id, credential_id): return self.delete_ec2_keypair(user_id, credential_id) # ###################### # # V3 Unsupported Methods # # ###################### # def create_trust_context(self): raise exception.NotSupported(feature='Keystone Trusts') def delete_trust(self, trust_id): raise exception.NotSupported(feature='Keystone Trusts') 
heat-10.0.0/contrib/rackspace/heat_keystoneclient_v2/__init__.py0000666000175100017510000000000013245511554024707 0ustar zuulzuul00000000000000heat-10.0.0/contrib/rackspace/heat_keystoneclient_v2/tests/0000775000175100017510000000000013245512113023740 5ustar zuulzuul00000000000000heat-10.0.0/contrib/rackspace/heat_keystoneclient_v2/tests/__init__.py0000666000175100017510000000007313245511554026063 0ustar zuulzuul00000000000000import sys from mox3 import mox sys.modules['mox'] = mox heat-10.0.0/contrib/rackspace/heat_keystoneclient_v2/tests/test_client.py0000666000175100017510000002567713245511554026662 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import mox from oslo_config import cfg from oslo_utils import importutils from heat.common import exception from heat.tests import common from heat.tests import utils from .. import client as heat_keystoneclient # noqa class KeystoneClientTest(common.HeatTestCase): """Test cases for heat.common.heat_keystoneclient.""" def setUp(self): super(KeystoneClientTest, self).setUp() self.ctx = utils.dummy_context() # Import auth_token to have keystone_authtoken settings setup. 
importutils.import_module('keystonemiddleware.auth_token') dummy_url = 'http://server.test:5000/v2.0' cfg.CONF.set_override('auth_uri', dummy_url, group='keystone_authtoken') cfg.CONF.set_override('admin_user', 'heat', group='keystone_authtoken') cfg.CONF.set_override('admin_password', 'verybadpass', group='keystone_authtoken') cfg.CONF.set_override('admin_tenant_name', 'service', group='keystone_authtoken') self.addCleanup(self.m.VerifyAll) def _stubs_v2(self, method='token', auth_ok=True, trust_scoped=True, user_id='trustor_user_id', region=None): self.mock_ks_client = self.m.CreateMock(heat_keystoneclient.kc.Client) self.m.StubOutWithMock(heat_keystoneclient.kc, "Client") if method == 'token': heat_keystoneclient.kc.Client( auth_url=mox.IgnoreArg(), endpoint=mox.IgnoreArg(), tenant_name='test_tenant', token='abcd1234', cacert=None, cert=None, insecure=False, region_name=region, key=None).AndReturn(self.mock_ks_client) self.mock_ks_client.authenticate().AndReturn(auth_ok) elif method == 'password': heat_keystoneclient.kc.Client( auth_url=mox.IgnoreArg(), endpoint=mox.IgnoreArg(), tenant_name='test_tenant', tenant_id='test_tenant_id', username='test_username', password='password', cacert=None, cert=None, insecure=False, region_name=region, key=None).AndReturn(self.mock_ks_client) self.mock_ks_client.authenticate().AndReturn(auth_ok) if method == 'trust': heat_keystoneclient.kc.Client( auth_url='http://server.test:5000/v2.0', endpoint='http://server.test:5000/v2.0', password='verybadpass', tenant_name='service', username='heat', cacert=None, cert=None, insecure=False, region_name=region, key=None).AndReturn(self.mock_ks_client) self.mock_ks_client.authenticate(trust_id='atrust123', tenant_id='test_tenant_id' ).AndReturn(auth_ok) self.mock_ks_client.auth_ref = self.m.CreateMockAnything() self.mock_ks_client.auth_ref.trust_scoped = trust_scoped self.mock_ks_client.auth_ref.auth_token = 'atrusttoken' self.mock_ks_client.auth_ref.user_id = user_id def 
test_username_length(self): """Test that user names >64 characters are properly truncated.""" self._stubs_v2() # a >64 character user name and the expected version long_user_name = 'U' * 64 + 'S' good_user_name = long_user_name[-64:] # mock keystone client user functions self.mock_ks_client.users = self.m.CreateMockAnything() mock_user = self.m.CreateMockAnything() # when keystone is called, the name should have been truncated # to the last 64 characters of the long name (self.mock_ks_client.users.create(good_user_name, 'password', mox.IgnoreArg(), enabled=True, tenant_id=mox.IgnoreArg()) .AndReturn(mock_user)) # mock out the call to roles; will send an error log message but does # not raise an exception self.mock_ks_client.roles = self.m.CreateMockAnything() self.mock_ks_client.roles.list().AndReturn([]) self.m.ReplayAll() # call create_stack_user with a long user name. # the cleanup VerifyAll should verify that though we passed # long_user_name, keystone was actually called with a truncated # user name self.ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) heat_ks_client.create_stack_user(long_user_name, password='password') def test_init_v2_password(self): """Test creating the client, user/password context.""" self._stubs_v2(method='password') self.m.ReplayAll() self.ctx.auth_token = None self.ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertIsNotNone(heat_ks_client.client) def test_init_v2_bad_nocreds(self): """Test creating the client without trusts, no credentials.""" self.ctx.auth_token = None self.ctx.username = None self.ctx.password = None self.ctx.trust_id = None heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertRaises(exception.AuthorizationFailure, heat_ks_client._v2_client_init) def test_trust_init(self): """Test consuming a trust when initializing.""" self._stubs_v2(method='trust') self.m.ReplayAll() self.ctx.username = None self.ctx.password = 
None self.ctx.auth_token = None self.ctx.trust_id = 'atrust123' self.ctx.trustor_user_id = 'trustor_user_id' heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) client = heat_ks_client.client self.assertIsNotNone(client) def test_trust_init_fail(self): """Test consuming a trust when initializing, error scoping.""" self._stubs_v2(method='trust', trust_scoped=False) self.m.ReplayAll() self.ctx.username = None self.ctx.password = None self.ctx.auth_token = None self.ctx.trust_id = 'atrust123' self.ctx.trustor_user_id = 'trustor_user_id' self.assertRaises(exception.AuthorizationFailure, heat_keystoneclient.KeystoneClientV2, self.ctx) def test_trust_init_fail_impersonation(self): """Test consuming a trust when initializing, impersonation error.""" self._stubs_v2(method='trust', user_id='wrong_user_id') self.m.ReplayAll() self.ctx.username = 'heat' self.ctx.password = None self.ctx.auth_token = None self.ctx.trust_id = 'atrust123' self.ctx.trustor_user_id = 'trustor_user_id' self.assertRaises(exception.AuthorizationFailure, heat_keystoneclient.KeystoneClientV2, self.ctx) def test_trust_init_pw(self): """Test trust_id is takes precedence username/password specified.""" self._stubs_v2(method='trust') self.m.ReplayAll() self.ctx.auth_token = None self.ctx.trust_id = 'atrust123' self.ctx.trustor_user_id = 'trustor_user_id' heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertIsNotNone(heat_ks_client._client) def test_trust_init_token(self): """Test trust_id takes precedence when token specified.""" self._stubs_v2(method='trust') self.m.ReplayAll() self.ctx.username = None self.ctx.password = None self.ctx.trust_id = 'atrust123' self.ctx.trustor_user_id = 'trustor_user_id' heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertIsNotNone(heat_ks_client._client) def test_region_name(self): """Test region_name is used when specified.""" self._stubs_v2(method='trust', region='region123') self.m.ReplayAll() self.ctx.username = 
None self.ctx.password = None self.ctx.auth_token = None self.ctx.trust_id = 'atrust123' self.ctx.trustor_user_id = 'trustor_user_id' self.ctx.region_name = 'region123' heat_keystoneclient.KeystoneClientV2(self.ctx) self.m.VerifyAll() # ##################### # # V3 Compatible Methods # # ##################### # def test_create_stack_domain_user_pass_through_to_create_stack_user(self): heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) mock_create_stack_user = mock.Mock() heat_ks_client.create_stack_user = mock_create_stack_user heat_ks_client.create_stack_domain_user('username', 'project_id', 'password') mock_create_stack_user.assert_called_once_with('username', 'password') def test_delete_stack_domain_user_pass_through_to_delete_stack_user(self): heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) mock_delete_stack_user = mock.Mock() heat_ks_client.delete_stack_user = mock_delete_stack_user heat_ks_client.delete_stack_domain_user('user_id', 'project_id') mock_delete_stack_user.assert_called_once_with('user_id') def test_create_stack_domain_project(self): tenant_id = self.ctx.tenant_id ks = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertEqual(tenant_id, ks.create_stack_domain_project('fakeid')) def test_delete_stack_domain_project(self): heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertIsNone(heat_ks_client.delete_stack_domain_project('fakeid')) # ###################### # # V3 Unsupported Methods # # ###################### # def test_create_trust_context(self): heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertRaises(exception.NotSupported, heat_ks_client.create_trust_context) def test_delete_trust(self): heat_ks_client = heat_keystoneclient.KeystoneClientV2(self.ctx) self.assertRaises(exception.NotSupported, heat_ks_client.delete_trust, 'fake_trust_id') heat-10.0.0/contrib/rackspace/README.md0000666000175100017510000000410113245511554017413 0ustar zuulzuul00000000000000# Heat 
resources for working with the Rackspace Cloud

The resources and configuration in this module are for using Heat with
the Rackspace Cloud. These resources either allow using Rackspace
services that don't have equivalent services in OpenStack or account
for differences between a generic OpenStack deployment and Rackspace
Cloud.

This package also includes a Keystone V2 compatible client plugin,
which can be used in place of the default client for clouds running
older versions of Keystone.

## Installation

### 1. Install the Rackspace plugins in Heat

NOTE: These instructions assume the value of heat.conf plugin_dirs
includes the default directory /usr/lib/heat.

- To install the plugin, from this directory run:

      sudo python ./setup.py install

- (Optional) If you want to enable the Keystone V2 client plugin, set
  the `keystone_backend` option to
  `heat.engine.plugins.heat_keystoneclient_v2.client.KeystoneClientV2`

### 2. Restart heat

Only the process "heat-engine" needs to be restarted to load the newly
installed plugin.

## Resources

The following resources are provided for compatibility:

* `Rackspace::Cloud::Server`:
>Provide compatibility with `OS::Nova::Server` and allow for working
`user_data` and `Metadata`. This is deprecated and should be replaced
with `OS::Nova::Server` once service compatibility is implemented by
Rackspace.

* `Rackspace::Cloud::LoadBalancer`:
>Use the Rackspace Cloud Loadbalancer service; not compatible with
`OS::Neutron::LoadBalancer`.
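The `keystone_backend` option mentioned above is a fully qualified dotted class path that Heat resolves at runtime. A sketch of how such a path can be turned into a class — the stdlib path used below is only so the example runs without Heat installed; in heat.conf you would use the KeystoneClientV2 path shown above:

```python
import importlib

def import_class(dotted_path):
    """Import a class given its fully qualified dotted path."""
    module_path, _, class_name = dotted_path.rpartition('.')
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Illustrative stand-in target; Heat itself uses oslo.utils importutils
# for the same job.
cls = import_class('collections.OrderedDict')
```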
### Usage

#### Templates

#### Configuration

## Heat Keystone V2

Note that some forward compatibility decisions had to be made for the
Keystone V2 client plugin:

* Stack domain users are created as users on the stack owner's tenant
  rather than the stack's domain
* Trusts are not supported

### How it works

By setting the `keystone_backend` option, the KeystoneBackend class in
`heat/engine/clients/os/keystone/heat_keystoneclient.py` will
instantiate the plugin KeystoneClientV2 class and use that instead of
the default client in
`heat/engine/clients/os/keystone/heat_keystoneclient.py`.

heat-10.0.0/contrib/rackspace/setup.cfg

[metadata]
name = heat-contrib-rackspace
summary = Heat resources for working with the Rackspace Cloud
description-file = README.md
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://docs.openstack.org/developer/heat/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7

[files]
packages = rackspace

# Copy to /usr/lib/heat for non-stevedore plugin loading
data_files =
    lib/heat/rackspace = rackspace/resources/*
    lib/heat/heat_keystoneclient_v2 = heat_keystoneclient_v2/*

[entry_points]
heat.clients =
    auto_scale = rackspace.clients:RackspaceAutoScaleClient
    cinder = rackspace.clients:RackspaceCinderClient
    cloud_dns = rackspace.clients:RackspaceCloudDNSClient
    cloud_lb = rackspace.clients:RackspaceCloudLBClient
    cloud_networks = rackspace.clients:RackspaceCloudNetworksClient
    glance = rackspace.clients:RackspaceGlanceClient
    nova = rackspace.clients:RackspaceNovaClient
    trove = rackspace.clients:RackspaceTroveClient
    swift = rackspace.clients:RackspaceSwiftClient

[global]
setup-hooks =
pbr.hooks.setup_hook heat-10.0.0/contrib/rackspace/requirements.txt0000666000175100017510000000007613245511554021427 0ustar zuulzuul00000000000000-e git+https://github.com/rackerlabs/heat-pyrax.git#egg=pyrax heat-10.0.0/contrib/rackspace/setup.py0000666000175100017510000000202513245511554017651 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr'], pbr=True) heat-10.0.0/contrib/rackspace/rackspace/0000775000175100017510000000000013245512113020062 5ustar zuulzuul00000000000000heat-10.0.0/contrib/rackspace/rackspace/clients.py0000666000175100017510000002112613245511554022111 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Client Libraries for Rackspace Resources.""" import hashlib import random import time from glanceclient import client as gc from oslo_config import cfg from oslo_log import log as logging from six.moves.urllib import parse from swiftclient import utils as swiftclient_utils from troveclient import client as tc from heat.common import exception from heat.engine.clients import client_plugin from heat.engine.clients.os import cinder from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine.clients.os import swift from heat.engine.clients.os import trove LOG = logging.getLogger(__name__) try: import pyrax except ImportError: pyrax = None class RackspaceClientPlugin(client_plugin.ClientPlugin): pyrax = None def _get_client(self, name): if self.pyrax is None: self._authenticate() return self.pyrax.get_client( name, cfg.CONF.region_name_for_services) def _authenticate(self): """Create an authenticated client context.""" self.pyrax = pyrax.create_context("rackspace") self.pyrax.auth_endpoint = self.context.auth_url LOG.info("Authenticating username: %s", self.context.username) tenant = self.context.tenant_id tenant_name = self.context.tenant self.pyrax.auth_with_token(self.context.auth_token, tenant_id=tenant, tenant_name=tenant_name) if not self.pyrax.authenticated: LOG.warning("Pyrax Authentication Failed.") raise exception.AuthorizationFailure() LOG.info("User %s authenticated successfully.", self.context.username) class RackspaceAutoScaleClient(RackspaceClientPlugin): def _create(self): """Rackspace Auto Scale client.""" return 
self._get_client("autoscale") class RackspaceCloudLBClient(RackspaceClientPlugin): def _create(self): """Rackspace cloud loadbalancer client.""" return self._get_client("load_balancer") class RackspaceCloudDNSClient(RackspaceClientPlugin): def _create(self): """Rackspace cloud dns client.""" return self._get_client("dns") class RackspaceNovaClient(nova.NovaClientPlugin, RackspaceClientPlugin): def _create(self): """Rackspace cloudservers client.""" client = self._get_client("compute") if not client: client = super(RackspaceNovaClient, self)._create() return client class RackspaceCloudNetworksClient(RackspaceClientPlugin): def _create(self): """Rackspace cloud networks client. Though pyrax "fixed" the network client bugs that were introduced in 1.8, it still doesn't work for contexts because of caching of the nova client. """ if not self.pyrax: self._authenticate() # need special handling now since the contextual # pyrax doesn't handle "networks" not being in # the catalog ep = pyrax._get_service_endpoint( self.pyrax, "compute", region=cfg.CONF.region_name_for_services) cls = pyrax._client_classes['compute:network'] client = cls(self.pyrax, region_name=cfg.CONF.region_name_for_services, management_url=ep) return client class RackspaceTroveClient(trove.TroveClientPlugin): """Rackspace trove client. Since the pyrax module uses its own client implementation for Cloud Databases, we have to skip pyrax on this one and override the super implementation to account for custom service type and regionalized management url. 
""" def _create(self): service_type = "rax:database" con = self.context endpoint_type = self._get_client_option('trove', 'endpoint_type') args = { 'service_type': service_type, 'auth_url': con.auth_url, 'proxy_token': con.auth_token, 'username': None, 'password': None, 'cacert': self._get_client_option('trove', 'ca_file'), 'insecure': self._get_client_option('trove', 'insecure'), 'endpoint_type': endpoint_type } client = tc.Client('1.0', **args) region = cfg.CONF.region_name_for_services management_url = self.url_for(service_type=service_type, endpoint_type=endpoint_type, region_name=region) client.client.auth_token = con.auth_token client.client.management_url = management_url return client class RackspaceCinderClient(cinder.CinderClientPlugin): def _create(self): """Override the region for the cinder client.""" client = super(RackspaceCinderClient, self)._create() management_url = self.url_for( service_type='volume', region_name=cfg.CONF.region_name_for_services) client.client.management_url = management_url return client class RackspaceSwiftClient(swift.SwiftClientPlugin): def is_valid_temp_url_path(self, path): """Return True if path is a valid Swift TempURL path, False otherwise. 
A Swift TempURL path must: - Be five parts, ['', 'v1', 'account', 'container', 'object'] - Be a v1 request - Have account, container, and object values - Have an object value with more than just '/'s :param path: The TempURL path :type path: string """ parts = path.split('/', 4) return bool(len(parts) == 5 and not parts[0] and parts[1] == 'v1' and parts[2] and parts[3] and parts[4].strip('/')) def get_temp_url(self, container_name, obj_name, timeout=None, method='PUT'): """Return a Swift TempURL.""" def tenant_uuid(): access = self.context.auth_token_info['access'] for role in access['user']['roles']: if role['name'] == 'object-store:default': return role['tenantId'] key_header = 'x-account-meta-temp-url-key' if key_header in self.client().head_account(): key = self.client().head_account()[key_header] else: key = hashlib.sha224(str(random.getrandbits(256))).hexdigest()[:32] self.client().post_account({key_header: key}) path = '/v1/%s/%s/%s' % (tenant_uuid(), container_name, obj_name) if timeout is None: timeout = swift.MAX_EPOCH - 60 - time.time() tempurl = swiftclient_utils.generate_temp_url(path, timeout, key, method) sw_url = parse.urlparse(self.client().url) return '%s://%s%s' % (sw_url.scheme, sw_url.netloc, tempurl) class RackspaceGlanceClient(glance.GlanceClientPlugin): def _create(self, version=None): con = self.context endpoint_type = self._get_client_option('glance', 'endpoint_type') endpoint = self.url_for( service_type='image', endpoint_type=endpoint_type, region_name=cfg.CONF.region_name_for_services) # Rackspace service catalog includes a tenant scoped glance # endpoint so we have to munge the url a bit glance_url = parse.urlparse(endpoint) # remove the tenant and following from the url endpoint = "%s://%s" % (glance_url.scheme, glance_url.hostname) args = { 'auth_url': con.auth_url, 'service_type': 'image', 'project_id': con.tenant, 'token': self.auth_token, 'endpoint_type': endpoint_type, 'ca_file': self._get_client_option('glance', 'ca_file'), 
'cert_file': self._get_client_option('glance', 'cert_file'), 'key_file': self._get_client_option('glance', 'key_file'), 'insecure': self._get_client_option('glance', 'insecure') } return gc.Client('2', endpoint, **args) heat-10.0.0/contrib/rackspace/rackspace/__init__.py0000666000175100017510000000006013245511554022201 0ustar zuulzuul00000000000000"""Contributed Rackspace-specific resources.""" heat-10.0.0/contrib/rackspace/rackspace/tests/0000775000175100017510000000000013245512113021224 5ustar zuulzuul00000000000000heat-10.0.0/contrib/rackspace/rackspace/tests/test_rackspace_cloud_server.py0000666000175100017510000006534413245511554027373 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
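The Swift TempURL path check in `RackspaceSwiftClient.is_valid_temp_url_path` above boils down to one split and a handful of tests; the standalone version below mirrors the logic from the source so it can be exercised on its own:

```python
def is_valid_temp_url_path(path):
    """Return True if path is a valid Swift TempURL path, False otherwise.

    The path must split into ['', 'v1', account, container, object],
    with a non-empty object that is more than just '/'s.
    """
    parts = path.split('/', 4)
    return bool(len(parts) == 5 and not parts[0] and parts[1] == 'v1'
                and parts[2] and parts[3] and parts[4].strip('/'))

is_valid_temp_url_path('/v1/AUTH_demo/container/object')  # True
is_valid_temp_url_path('/v2/AUTH_demo/container/object')  # False: not a v1 request
is_valid_temp_url_path('/v1/AUTH_demo/container/////')    # False: object is only '/'s
```

Because `split('/', 4)` stops after four splits, an object name containing slashes (e.g. a pseudo-directory path) stays in one piece and is accepted.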
import mock from oslo_config import cfg from oslo_utils import uuidutils import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import neutron from heat.engine.clients.os import nova from heat.engine import environment from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.engine import stack as parser from heat.engine import template from heat.tests import common from heat.tests.openstack.nova import fakes from heat.tests import utils from ..resources import cloud_server # noqa wp_template = ''' { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "WordPress", "Parameters" : { "key_name" : { "Description" : "key_name", "Type" : "String", "Default" : "test" } }, "Resources" : { "WebServer": { "Type": "OS::Nova::Server", "Properties": { "image" : "CentOS 5.2", "flavor" : "256 MB Server", "key_name" : "test", "user_data" : "wordpress" } } } } ''' cfg.CONF.import_opt('region_name_for_services', 'heat.common.config') class CloudServersTest(common.HeatTestCase): def setUp(self): super(CloudServersTest, self).setUp() cfg.CONF.set_override('region_name_for_services', 'RegionOne') self.ctx = utils.dummy_context() self.fc = fakes.FakeClient() mock_nova_create = mock.Mock() self.ctx.clients.client_plugin( 'nova')._create = mock_nova_create mock_nova_create.return_value = self.fc # Test environment may not have pyrax client library installed and if # pyrax is not installed resource class would not be registered. # So register resource provider class explicitly for unit testing. 
resource._register_class("OS::Nova::Server", cloud_server.CloudServer) def _setup_test_stack(self, stack_name): t = template_format.parse(wp_template) templ = template.Template( t, env=environment.Environment({'key_name': 'test'})) self.stack = parser.Stack(self.ctx, stack_name, templ, stack_id=uuidutils.generate_uuid()) return (templ, self.stack) def _setup_test_server(self, return_server, name, image_id=None, override_name=False, stub_create=True): stack_name = '%s_s' % name (tmpl, stack) = self._setup_test_stack(stack_name) tmpl.t['Resources']['WebServer']['Properties'][ 'image'] = image_id or 'CentOS 5.2' tmpl.t['Resources']['WebServer']['Properties'][ 'flavor'] = '256 MB Server' self.patchobject(neutron.NeutronClientPlugin, 'find_resourceid_by_name_or_id', return_value='aaaaaa') self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id', return_value=1) self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id', return_value=1) server_name = '%s' % name if override_name: tmpl.t['Resources']['WebServer']['Properties'][ 'name'] = server_name resource_defns = tmpl.resource_definitions(stack) server = cloud_server.CloudServer(server_name, resource_defns['WebServer'], stack) self.patchobject(nova.NovaClientPlugin, '_create', return_value=self.fc) self.patchobject(server, 'store_external_ports') if stub_create: self.patchobject(self.fc.servers, 'create', return_value=return_server) # mock check_create_complete innards self.patchobject(self.fc.servers, 'get', return_value=return_server) return server def _create_test_server(self, return_server, name, override_name=False, stub_create=True): server = self._setup_test_server(return_server, name, stub_create=stub_create) scheduler.TaskRunner(server.create)() return server def _mock_metadata_os_distro(self): image_data = mock.Mock(metadata={'os_distro': 'centos'}) self.fc.images.get = mock.Mock(return_value=image_data) def test_rackconnect_deployed(self): return_server = self.fc.servers.list()[1] 
        return_server.metadata = {
            'rackconnect_automation_status': 'DEPLOYED',
            'rax_service_level_automation': 'Complete',
        }
        server = self._setup_test_server(return_server,
                                         'test_rackconnect_deployed')
        server.context.roles = ['rack_connect']
        scheduler.TaskRunner(server.create)()
        self.assertEqual('CREATE', server.action)
        self.assertEqual('COMPLETE', server.status)

    def test_rackconnect_failed(self):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {
            'rackconnect_automation_status': 'FAILED',
            'rax_service_level_automation': 'Complete',
        }
        server = self._setup_test_server(return_server,
                                         'test_rackconnect_failed')
        server.context.roles = ['rack_connect']
        create = scheduler.TaskRunner(server.create)
        exc = self.assertRaises(exception.ResourceFailure, create)
        self.assertEqual('Error: resources.test_rackconnect_failed: '
                         'RackConnect automation FAILED',
                         six.text_type(exc))

    def test_rackconnect_unprocessable(self):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {
            'rackconnect_automation_status': 'UNPROCESSABLE',
            'rackconnect_unprocessable_reason': 'Fake reason',
            'rax_service_level_automation': 'Complete',
        }
        server = self._setup_test_server(return_server,
                                         'test_rackconnect_unprocessable')
        server.context.roles = ['rack_connect']
        scheduler.TaskRunner(server.create)()
        self.assertEqual('CREATE', server.action)
        self.assertEqual('COMPLETE', server.status)

    def test_rackconnect_unknown(self):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {
            'rackconnect_automation_status': 'FOO',
            'rax_service_level_automation': 'Complete',
        }
        server = self._setup_test_server(return_server,
                                         'test_rackconnect_unknown')
        server.context.roles = ['rack_connect']
        create = scheduler.TaskRunner(server.create)
        exc = self.assertRaises(exception.ResourceFailure, create)
        self.assertEqual('Error: resources.test_rackconnect_unknown: '
                         'Unknown RackConnect automation status: FOO',
                         six.text_type(exc))

    def test_rackconnect_deploying(self):
        return_server = self.fc.servers.list()[0]
        server = self._setup_test_server(return_server, 'srv_sts_bld')
        server.resource_id = 1234
        server.context.roles = ['rack_connect']
        check_iterations = [0]

        # Bind fake get method which check_create_complete will call
        def activate_status(server):
            check_iterations[0] += 1
            if check_iterations[0] == 1:
                return_server.metadata.update({
                    'rackconnect_automation_status': 'DEPLOYING',
                    'rax_service_level_automation': 'Complete',
                })
            if check_iterations[0] == 2:
                return_server.status = 'ACTIVE'
            if check_iterations[0] > 3:
                return_server.metadata.update({
                    'rackconnect_automation_status': 'DEPLOYED',
                })
            return return_server
        self.patchobject(self.fc.servers, 'get',
                         side_effect=activate_status)
        scheduler.TaskRunner(server.create)()
        self.assertEqual((server.CREATE, server.COMPLETE), server.state)

    def test_rackconnect_no_status(self):
        return_server = self.fc.servers.list()[0]
        server = self._setup_test_server(return_server, 'srv_sts_bld')
        server.resource_id = 1234
        server.context.roles = ['rack_connect']
        check_iterations = [0]

        # Bind fake get method which check_create_complete will call
        def activate_status(server):
            check_iterations[0] += 1
            if check_iterations[0] == 1:
                return_server.status = 'ACTIVE'
            if check_iterations[0] > 2:
                return_server.metadata.update({
                    'rackconnect_automation_status': 'DEPLOYED',
                    'rax_service_level_automation': 'Complete'})
            return return_server
        self.patchobject(self.fc.servers, 'get',
                         side_effect=activate_status)
        scheduler.TaskRunner(server.create)()
        self.assertEqual((server.CREATE, server.COMPLETE), server.state)

    def test_rax_automation_lifecycle(self):
        return_server = self.fc.servers.list()[0]
        server = self._setup_test_server(return_server, 'srv_sts_bld')
        server.resource_id = 1234
        server.context.roles = ['rack_connect']
        server.metadata = {}
        check_iterations = [0]

        # Bind fake get method which check_create_complete will call
        def activate_status(server):
            check_iterations[0] += 1
            if check_iterations[0] == 1:
                return_server.status = 'ACTIVE'
            if check_iterations[0] == 2:
                return_server.metadata = {
                    'rackconnect_automation_status': 'DEPLOYED'}
            if check_iterations[0] == 3:
                return_server.metadata = {
                    'rackconnect_automation_status': 'DEPLOYED',
                    'rax_service_level_automation': 'In Progress'}
            if check_iterations[0] > 3:
                return_server.metadata = {
                    'rackconnect_automation_status': 'DEPLOYED',
                    'rax_service_level_automation': 'Complete'}
            return return_server
        self.patchobject(self.fc.servers, 'get',
                         side_effect=activate_status)
        scheduler.TaskRunner(server.create)()
        self.assertEqual((server.CREATE, server.COMPLETE), server.state)

    def test_add_port_for_addresses(self):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {'rax_service_level_automation': 'Complete'}
        stack_name = 'test_stack'
        (tmpl, stack) = self._setup_test_stack(stack_name)
        resource_defns = tmpl.resource_definitions(stack)
        self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id',
                         return_value=1)
        self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id',
                         return_value=1)
        server = cloud_server.CloudServer('WebServer',
                                          resource_defns['WebServer'],
                                          stack)
        self.patchobject(server, 'store_external_ports')

        class Interface(object):
            def __init__(self, id, addresses):
                self.identifier = id
                self.addresses = addresses

            @property
            def id(self):
                return self.identifier

            @property
            def ip_addresses(self):
                return self.addresses

        interfaces = [
            {
                "id": "port-uuid-1",
                "ip_addresses": [
                    {
                        "address": "4.5.6.7",
                        "network_id": "00xx000-0xx0-0xx0-0xx0-00xxx000",
                        "network_label": "public"
                    },
                    {
                        "address": "2001:4802:7805:104:be76:4eff:fe20:2063",
                        "network_id": "00xx000-0xx0-0xx0-0xx0-00xxx000",
                        "network_label": "public"
                    }
                ],
                "mac_address": "fa:16:3e:8c:22:aa"
            },
            {
                "id": "port-uuid-2",
                "ip_addresses": [
                    {
                        "address": "5.6.9.8",
                        "network_id": "11xx1-1xx1-xx11-1xx1-11xxxx11",
                        "network_label": "public"
                    }
                ],
                "mac_address": "fa:16:3e:8c:44:cc"
            },
            {
                "id": "port-uuid-3",
                "ip_addresses": [
                    {
                        "address": "10.13.12.13",
                        "network_id": "1xx1-1xx1-xx11-1xx1-11xxxx11",
                        "network_label": "private"
                    }
                ],
                "mac_address": "fa:16:3e:8c:44:dd"
            }
        ]
        ifaces = [Interface(i['id'], i['ip_addresses']) for i in interfaces]
        expected = {
            'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa':
            [{'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:8c:22:aa',
              'addr': '4.5.6.7',
              'port': 'port-uuid-1',
              'version': 4},
             {'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:8c:33:bb',
              'addr': '5.6.9.8',
              'port': 'port-uuid-2',
              'version': 4}],
            'private': [{'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:8c:44:cc',
                         'addr': '10.13.12.13',
                         'port': 'port-uuid-3',
                         'version': 4}],
            'public': [{'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:8c:22:aa',
                        'addr': '4.5.6.7',
                        'port': 'port-uuid-1',
                        'version': 4},
                       {'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:8c:33:bb',
                        'addr': '5.6.9.8',
                        'port': 'port-uuid-2',
                        'version': 4}]}

        server.client = mock.Mock()
        mock_client = mock.Mock()
        server.client.return_value = mock_client
        mock_ext = mock_client.os_virtual_interfacesv2_python_novaclient_ext
        mock_ext.list.return_value = ifaces
        resp = server._add_port_for_address(return_server)
        self.assertEqual(expected, resp)

    def test_rax_automation_build_error(self):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {'rax_service_level_automation':
                                  'Build Error'}
        server = self._setup_test_server(return_server,
                                         'test_managed_cloud_build_error')
        create = scheduler.TaskRunner(server.create)
        exc = self.assertRaises(exception.ResourceFailure, create)
        self.assertEqual('Error: resources.test_managed_cloud_build_error: '
                         'Rackspace Cloud automation failed',
                         six.text_type(exc))

    def test_rax_automation_unknown(self):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {'rax_service_level_automation': 'FOO'}
        server = self._setup_test_server(return_server,
                                         'test_managed_cloud_unknown')
        create = scheduler.TaskRunner(server.create)
        exc = self.assertRaises(exception.ResourceFailure, create)
        self.assertEqual('Error: resources.test_managed_cloud_unknown: '
                         'Unknown Rackspace Cloud automation status: FOO',
                         six.text_type(exc))

    def _test_server_config_drive(self, user_data, config_drive, result,
                                  ud_format='RAW'):
        return_server = self.fc.servers.list()[1]
        return_server.metadata = {'rax_service_level_automation': 'Complete'}
        stack_name = 'no_user_data'
        self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id',
                         return_value=1)
        self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id',
                         return_value=1)
        (tmpl, stack) = self._setup_test_stack(stack_name)
        properties = tmpl.t['Resources']['WebServer']['Properties']
        properties['user_data'] = user_data
        properties['config_drive'] = config_drive
        properties['user_data_format'] = ud_format
        properties['software_config_transport'] = "POLL_TEMP_URL"
        resource_defns = tmpl.resource_definitions(stack)
        server = cloud_server.CloudServer('WebServer',
                                          resource_defns['WebServer'],
                                          stack)
        server.metadata = {'rax_service_level_automation': 'Complete'}
        self.patchobject(server, 'store_external_ports')
        self.patchobject(server, "_populate_deployments_metadata")
        mock_servers_create = mock.Mock(return_value=return_server)
        self.fc.servers.create = mock_servers_create
        self.patchobject(self.fc.servers, 'get',
                         return_value=return_server)
        scheduler.TaskRunner(server.create)()
        mock_servers_create.assert_called_with(
            image=mock.ANY,
            flavor=mock.ANY,
            key_name=mock.ANY,
            name=mock.ANY,
            security_groups=mock.ANY,
            userdata=mock.ANY,
            scheduler_hints=mock.ANY,
            meta=mock.ANY,
            nics=mock.ANY,
            availability_zone=mock.ANY,
            block_device_mapping=mock.ANY,
            block_device_mapping_v2=mock.ANY,
            config_drive=result,
            disk_config=mock.ANY,
            reservation_id=mock.ANY,
            files=mock.ANY,
            admin_pass=mock.ANY)

    def test_server_user_data_no_config_drive(self):
        self._test_server_config_drive("my script", False, True)

    def test_server_user_data_config_drive(self):
        self._test_server_config_drive("my script", True, True)

    def test_server_no_user_data_config_drive(self):
        self._test_server_config_drive(None, True, True)

    def test_server_no_user_data_no_config_drive(self):
        self._test_server_config_drive(None, False, False)

    def test_server_no_user_data_software_config(self):
        self._test_server_config_drive(None, False, True,
                                       ud_format="SOFTWARE_CONFIG")


@mock.patch.object(resource.Resource, "client_plugin")
@mock.patch.object(resource.Resource, "client")
class CloudServersValidationTests(common.HeatTestCase):
    def setUp(self):
        super(CloudServersValidationTests, self).setUp()
        resource._register_class("OS::Nova::Server", cloud_server.CloudServer)
        properties_server = {
            "image": "CentOS 5.2",
            "flavor": "256 MB Server",
            "key_name": "test",
            "user_data": "wordpress",
        }
        self.mockstack = mock.Mock()
        self.mockstack.has_cache_data.return_value = False
        self.mockstack.db_resource_get.return_value = None
        self.rsrcdef = rsrc_defn.ResourceDefinition(
            "test", cloud_server.CloudServer, properties=properties_server)

    def test_validate_no_image(self, mock_client, mock_plugin):
        properties_server = {
            "flavor": "256 MB Server",
            "key_name": "test",
            "user_data": "wordpress",
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", cloud_server.CloudServer, properties=properties_server)
        mock_plugin().find_flavor_by_name_or_id.return_value = 1
        server = cloud_server.CloudServer("test", rsrcdef, self.mockstack)
        mock_boot_vol = self.patchobject(
            server, '_validate_block_device_mapping')
        mock_boot_vol.return_value = True
        self.assertIsNone(server.validate())

    def test_validate_no_image_bfv(self, mock_client, mock_plugin):
        properties_server = {
            "flavor": "256 MB Server",
            "key_name": "test",
            "user_data": "wordpress",
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", cloud_server.CloudServer, properties=properties_server)
        mock_plugin().find_flavor_by_name_or_id.return_value = 1
        server = cloud_server.CloudServer("test", rsrcdef, self.mockstack)
        mock_boot_vol = self.patchobject(
            server, '_validate_block_device_mapping')
        mock_boot_vol.return_value = True
        mock_flavor = mock.Mock(ram=4)
        mock_flavor.to_dict.return_value = {
            'OS-FLV-WITH-EXT-SPECS:extra_specs': {
                'class': 'standard1',
            },
        }
        mock_plugin().get_flavor.return_value = mock_flavor
        error = self.assertRaises(
            exception.StackValidationFailed, server.validate)
        self.assertEqual(
            'Flavor 256 MB Server cannot be booted from volume.',
            six.text_type(error))

    def test_validate_bfv_volume_only(self, mock_client, mock_plugin):
        mock_plugin().find_flavor_by_name_or_id.return_value = 1
        mock_plugin().find_image_by_name_or_id.return_value = 1
        server = cloud_server.CloudServer("test", self.rsrcdef, self.mockstack)
        mock_flavor = mock.Mock(ram=4, disk=4)
        mock_flavor.to_dict.return_value = {
            'OS-FLV-WITH-EXT-SPECS:extra_specs': {
                'class': 'memory1',
            },
        }
        mock_image = mock.Mock(status='ACTIVE', min_ram=2, min_disk=1)
        mock_image.get.return_value = "memory1"
        mock_image.__iter__ = mock.Mock(return_value=iter([]))
        mock_plugin().get_flavor.return_value = mock_flavor
        mock_plugin().get_image.return_value = mock_image
        error = self.assertRaises(
            exception.StackValidationFailed, server.validate)
        self.assertEqual(
            'Flavor 256 MB Server must be booted from volume, '
            'but image CentOS 5.2 was also specified.',
            six.text_type(error))

    def test_validate_image_flavor_excluded_class(self, mock_client,
                                                  mock_plugin):
        mock_plugin().find_flavor_by_name_or_id.return_value = 1
        mock_plugin().find_image_by_name_or_id.return_value = 1
        server = cloud_server.CloudServer("test", self.rsrcdef, self.mockstack)
        mock_image = mock.Mock(status='ACTIVE', min_ram=2, min_disk=1)
        mock_image.get.return_value = "!standard1, *"
        mock_image.__iter__ = mock.Mock(return_value=iter([]))
        mock_flavor = mock.Mock(ram=4, disk=4)
        mock_flavor.to_dict.return_value = {
            'OS-FLV-WITH-EXT-SPECS:extra_specs': {
                'class': 'standard1',
            },
        }
        mock_plugin().get_flavor.return_value = mock_flavor
        mock_plugin().get_image.return_value = mock_image
        error = self.assertRaises(
            exception.StackValidationFailed, server.validate)
        self.assertEqual(
            'Flavor 256 MB Server cannot be used with image CentOS 5.2.',
            six.text_type(error))

    def test_validate_image_flavor_ok(self, mock_client, mock_plugin):
        mock_plugin().find_flavor_by_name_or_id.return_value = 1
        mock_plugin().find_image_by_name_or_id.return_value = 1
        server = cloud_server.CloudServer("test", self.rsrcdef, self.mockstack)
        mock_image = mock.Mock(size=1, status='ACTIVE', min_ram=2, min_disk=2)
        mock_image.get.return_value = "standard1"
        mock_image.__iter__ = mock.Mock(return_value=iter([]))
        mock_flavor = mock.Mock(ram=4, disk=4)
        mock_flavor.to_dict.return_value = {
            'OS-FLV-WITH-EXT-SPECS:extra_specs': {
                'class': 'standard1',
                'disk_io_index': 1,
            },
        }
        mock_plugin().get_flavor.return_value = mock_flavor
        mock_plugin().get_image.return_value = mock_image
        self.assertIsNone(server.validate())

    def test_validate_image_flavor_empty_metadata(self, mock_client,
                                                  mock_plugin):
        server = cloud_server.CloudServer("test", self.rsrcdef, self.mockstack)
        mock_image = mock.Mock(size=1, status='ACTIVE', min_ram=2, min_disk=2)
        mock_image.get.return_value = ""
        mock_image.__iter__ = mock.Mock(return_value=iter([]))
        mock_flavor = mock.Mock(ram=4, disk=4)
        mock_flavor.to_dict.return_value = {
            'OS-FLV-WITH-EXT-SPECS:extra_specs': {
                'flavor_classes': '',
            },
        }
        mock_plugin().get_flavor.return_value = mock_flavor
        mock_plugin().get_image.return_value = mock_image
        self.assertIsNone(server.validate())

    def test_validate_image_flavor_no_metadata(self, mock_client,
                                               mock_plugin):
        server = cloud_server.CloudServer("test", self.rsrcdef, self.mockstack)
        mock_image = mock.Mock(size=1, status='ACTIVE', min_ram=2, min_disk=2)
        mock_image.get.return_value = None
        mock_image.__iter__ = mock.Mock(return_value=iter([]))
        mock_flavor = mock.Mock(ram=4, disk=4)
        mock_flavor.to_dict.return_value = {}
        mock_plugin().get_flavor.return_value = mock_flavor
        mock_plugin().get_image.return_value = mock_image
        self.assertIsNone(server.validate())

    def test_validate_image_flavor_not_base(self, mock_client, mock_plugin):
        server = cloud_server.CloudServer("test", self.rsrcdef, self.mockstack)
        mock_image = mock.Mock(size=1, status='ACTIVE', min_ram=2, min_disk=2)
        mock_image.get.return_value = None
        mock_image.__iter__ = mock.Mock(return_value=iter(
            ['base_image_ref']))
        mock_image.__getitem__ = mock.Mock(return_value='1234')
        mock_base_image = mock.Mock(size=1, status='ACTIVE', min_ram=2,
                                    min_disk=2)
        mock_base_image.get.return_value = None
        mock_base_image.__iter__ = mock.Mock(return_value=iter([]))
        mock_flavor = mock.Mock(ram=4, disk=4)
        mock_flavor.to_dict.return_value = {}
        mock_plugin().get_flavor.return_value = mock_flavor
        mock_plugin().get_image.side_effect = [mock_image, mock_base_image]
        self.assertIsNone(server.validate())
heat-10.0.0/contrib/rackspace/rackspace/tests/test_rackspace_dns.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid

import mock

from heat.common import exception
from heat.common import template_format
from heat.engine import environment
from heat.engine import resource
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.engine import stack as parser
from heat.engine import template
from heat.tests import common
from heat.tests import utils

from ..resources import cloud_dns  # noqa

domain_only_template = '''
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Dns instance running on Rackspace cloud",
  "Parameters" : {
    "UnittestDomain" : {
      "Description" : "Domain for unit tests",
      "Type" : "String",
      "Default" : 'dnsheatunittest.com'
    },
    "dnsttl" : {
      "Description" : "TTL for the domain",
      "Type" : "Number",
      "MinValue" : '301',
      "Default" : '301'
    },
    "name": {
      "Description" : "The cloud dns instance name",
      "Type": "String",
      "Default": "CloudDNS"
    }
  },
  "Resources" : {
    "domain" : {
      "Type": "Rackspace::Cloud::DNS",
      "Properties" : {
        "name" : "dnsheatunittest.com",
        "emailAddress" : "admin@dnsheatunittest.com",
        "ttl" : 3600,
        "comment" : "Testing Cloud DNS integration with Heat"
      }
    }
  }
}
'''


class FakeDnsInstance(object):
    def __init__(self):
        self.id = 4
        self.resource_id = 4

    def get(self):
        pass

    def delete(self):
        pass


class RackspaceDnsTest(common.HeatTestCase):
    def setUp(self):
        super(RackspaceDnsTest, self).setUp()
        # Test environment may not have pyrax client library installed and if
        # pyrax is not installed resource class would not be registered.
        # So register resource provider class explicitly for unit testing.
        resource._register_class("Rackspace::Cloud::DNS", cloud_dns.CloudDns)
        self.create_domain_only_args = {
            "name": 'dnsheatunittest.com',
            "emailAddress": 'admin@dnsheatunittest.com',
            "ttl": 3600,
            "comment": 'Testing Cloud DNS integration with Heat',
            "records": None
        }
        self.update_domain_only_args = {
            "emailAddress": 'updatedEmail@example.com',
            "ttl": 5555,
            "comment": 'updated comment'
        }

    def _setup_test_cloud_dns_instance(self, name, parsed_t):
        stack_name = '%s_stack' % name
        t = parsed_t
        templ = template.Template(
            t, env=environment.Environment({'name': 'test'}))
        self.stack = parser.Stack(utils.dummy_context(),
                                  stack_name,
                                  templ,
                                  stack_id=str(uuid.uuid4()))
        instance = cloud_dns.CloudDns(
            '%s_name' % name,
            templ.resource_definitions(self.stack)['domain'],
            self.stack)
        return instance

    def _stubout_create(self, instance, fake_dnsinstance, **create_args):
        mock_client = self.m.CreateMockAnything()
        self.m.StubOutWithMock(instance, 'cloud_dns')
        instance.cloud_dns().AndReturn(mock_client)
        self.m.StubOutWithMock(mock_client, "create")
        mock_client.create(**create_args).AndReturn(fake_dnsinstance)
        self.m.ReplayAll()

    def _stubout_update(
            self, instance, fake_dnsinstance, updateRecords=None,
            **update_args):
        mock_client = self.m.CreateMockAnything()
        self.m.StubOutWithMock(instance, 'cloud_dns')
        instance.cloud_dns().AndReturn(mock_client)
        self.m.StubOutWithMock(mock_client, "get")
        mock_domain = self.m.CreateMockAnything()
        mock_client.get(fake_dnsinstance.resource_id).AndReturn(mock_domain)
        self.m.StubOutWithMock(mock_domain, "update")
        mock_domain.update(**update_args).AndReturn(fake_dnsinstance)
        if updateRecords:
            fake_records = list()
            mock_domain.list_records().AndReturn(fake_records)
            mock_domain.add_records([{
                'comment': None,
                'priority': None,
                'type': 'A',
                'name': 'ftp.example.com',
                'data': '192.0.2.8',
                'ttl': 3600}])
        self.m.ReplayAll()

    def _get_create_args_with_comments(self, record):
        record_with_comment = [dict(record[0])]
        record_with_comment[0]["comment"] = None
        create_record_args = dict()
        create_record_args['records'] = record_with_comment
        create_args = dict(
            list(self.create_domain_only_args.items()) +
            list(create_record_args.items()))
        return create_args

    def test_create_domain_only(self):
        """Test domain create only without any records."""
        fake_dns_instance = FakeDnsInstance()
        t = template_format.parse(domain_only_template)
        instance = self._setup_test_cloud_dns_instance('dnsinstance_create', t)
        create_args = self.create_domain_only_args
        self._stubout_create(instance, fake_dns_instance, **create_args)
        scheduler.TaskRunner(instance.create)()
        self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state)
        self.m.VerifyAll()

    def test_create_domain_with_a_record(self):
        """Test domain create with an A record.

        This should not have a priority field.
        """
        fake_dns_instance = FakeDnsInstance()
        t = template_format.parse(domain_only_template)
        a_record = [{
            "type": "A",
            "name": "ftp.example.com",
            "data": "192.0.2.8",
            "ttl": 3600
        }]
        t['Resources']['domain']['Properties']['records'] = a_record
        instance = self._setup_test_cloud_dns_instance('dnsinstance_create', t)
        create_args = self._get_create_args_with_comments(a_record)
        self._stubout_create(instance, fake_dns_instance, **create_args)
        scheduler.TaskRunner(instance.create)()
        self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state)
        self.m.VerifyAll()

    def test_create_domain_with_mx_record(self):
        """Test domain create with an MX record.

        This should have a priority field.
        """
        fake_dns_instance = FakeDnsInstance()
        t = template_format.parse(domain_only_template)
        mx_record = [{
            "type": "MX",
            "name": "example.com",
            "data": "mail.example.com",
            "priority": 5,
            "ttl": 3600
        }]
        t['Resources']['domain']['Properties']['records'] = mx_record
        instance = self._setup_test_cloud_dns_instance('dnsinstance_create', t)
        create_args = self._get_create_args_with_comments(mx_record)
        self._stubout_create(instance, fake_dns_instance, **create_args)
        scheduler.TaskRunner(instance.create)()
        self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state)
        self.m.VerifyAll()

    def test_check(self):
        t = template_format.parse(domain_only_template)
        instance = self._setup_test_cloud_dns_instance('dnsinstance_create', t)
        mock_get = mock.Mock()
        instance.cloud_dns = mock.Mock()
        instance.cloud_dns.return_value.get = mock_get

        scheduler.TaskRunner(instance.check)()
        self.assertEqual('CHECK', instance.action)
        self.assertEqual('COMPLETE', instance.status)

        mock_get.side_effect = cloud_dns.NotFound('boom')
        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(instance.check))
        self.assertEqual('CHECK', instance.action)
        self.assertEqual('FAILED', instance.status)
        self.assertIn('boom', str(exc))

    def test_update(self, updateRecords=None):
        """Helper function for testing domain updates."""
        fake_dns_instance = FakeDnsInstance()
        t = template_format.parse(domain_only_template)
        instance = self._setup_test_cloud_dns_instance('dnsinstance_update', t)
        instance.resource_id = 4
        update_args = self.update_domain_only_args
        self._stubout_update(
            instance, fake_dns_instance, updateRecords, **update_args)

        uprops = dict(instance.properties)
        uprops.update({
            'emailAddress': 'updatedEmail@example.com',
            'ttl': 5555,
            'comment': 'updated comment',
        })
        if updateRecords:
            uprops['records'] = updateRecords
        ut = rsrc_defn.ResourceDefinition(instance.name,
                                          instance.type(),
                                          uprops)
        instance.state_set(instance.CREATE, instance.COMPLETE)
        scheduler.TaskRunner(instance.update, ut)()
        self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state)
        self.m.VerifyAll()

    def test_update_domain_only(self):
        """Test domain update without any records."""
        self.test_update()

    def test_update_domain_with_a_record(self):
        """Test domain update with an A record."""
        a_record = [{'type': 'A',
                     'name': 'ftp.example.com',
                     'data': '192.0.2.8',
                     'ttl': 3600}]
        self.test_update(updateRecords=a_record)

    def test_update_record_only(self):
        """Test updating the domain records only."""
        fake_dns_instance = FakeDnsInstance()
        t = template_format.parse(domain_only_template)
        instance = self._setup_test_cloud_dns_instance('dnsinstance_update', t)
        instance.resource_id = 4
        update_records = [{'type': 'A',
                           'name': 'ftp.example.com',
                           'data': '192.0.2.8',
                           'ttl': 3600}]

        mock_client = self.m.CreateMockAnything()
        self.m.StubOutWithMock(instance, 'cloud_dns')
        instance.cloud_dns().AndReturn(mock_client)
        self.m.StubOutWithMock(mock_client, "get")
        mock_domain = self.m.CreateMockAnything()
        mock_client.get(fake_dns_instance.resource_id).AndReturn(mock_domain)

        # mock_domain.update shouldn't be called in this scenario, so
        # stub it out but don't record a call to it
        self.m.StubOutWithMock(mock_domain, "update")

        fake_records = list()
        mock_domain.list_records().AndReturn(fake_records)
        mock_domain.add_records([{
            'comment': None,
            'priority': None,
            'type': 'A',
            'name': 'ftp.example.com',
            'data': '192.0.2.8',
            'ttl': 3600}])
        self.m.ReplayAll()

        uprops = dict(instance.properties)
        uprops['records'] = update_records
        ut = rsrc_defn.ResourceDefinition(instance.name,
                                          instance.type(),
                                          uprops)
        instance.state_set(instance.CREATE, instance.COMPLETE)
        scheduler.TaskRunner(instance.update, ut)()
        self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state)
        self.m.VerifyAll()
heat-10.0.0/contrib/rackspace/rackspace/tests/test_lb_node.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime

import mock

from heat.engine import rsrc_defn
from heat.tests import common

from ..resources import lb_node  # noqa
from ..resources.lb_node import (  # noqa
    LoadbalancerDeleted,
    NotFound,
    NodeNotFound)

from .test_cloud_loadbalancer import FakeNode  # noqa


class LBNode(lb_node.LBNode):
    @classmethod
    def is_service_available(cls, context):
        return (True, None)


class LBNodeTest(common.HeatTestCase):
    def setUp(self):
        super(LBNodeTest, self).setUp()
        self.mockstack = mock.Mock()
        self.mockstack.has_cache_data.return_value = False
        self.mockstack.db_resource_get.return_value = None
        self.mockclient = mock.Mock()
        self.mockstack.clients.client.return_value = self.mockclient

        self.def_props = {
            LBNode.LOAD_BALANCER: 'some_lb_id',
            LBNode.DRAINING_TIMEOUT: 60,
            LBNode.ADDRESS: 'some_ip',
            LBNode.PORT: 80,
            LBNode.CONDITION: 'ENABLED',
            LBNode.TYPE: 'PRIMARY',
            LBNode.WEIGHT: None,
        }
        self.resource_def = rsrc_defn.ResourceDefinition(
            "test", LBNode, properties=self.def_props)

        self.resource = LBNode("test", self.resource_def, self.mockstack)
        self.resource.resource_id = 12345

    def test_create(self):
        self.resource.resource_id = None
        fake_lb = mock.Mock()
        fake_lb.add_nodes.return_value = (None, {'nodes': [{'id': 12345}]})
        self.mockclient.get.return_value = fake_lb
        fake_node = mock.Mock()
        self.mockclient.Node.return_value = fake_node

        self.resource.check_create_complete()

        self.mockclient.get.assert_called_once_with('some_lb_id')
        self.mockclient.Node.assert_called_once_with(
            address='some_ip', port=80, condition='ENABLED',
            type='PRIMARY', weight=0)
        fake_lb.add_nodes.assert_called_once_with([fake_node])
        self.assertEqual(self.resource.resource_id, 12345)

    def test_create_lb_not_found(self):
        self.mockclient.get.side_effect = NotFound()
        self.assertRaises(NotFound, self.resource.check_create_complete)

    def test_create_lb_deleted(self):
        fake_lb = mock.Mock()
        fake_lb.id = 1111
        fake_lb.status = 'DELETED'
        self.mockclient.get.return_value = fake_lb

        exc = self.assertRaises(LoadbalancerDeleted,
                                self.resource.check_create_complete)
        self.assertEqual("The Load Balancer (ID 1111) has been deleted.",
                         str(exc))

    def test_create_lb_pending_delete(self):
        fake_lb = mock.Mock()
        fake_lb.id = 1111
        fake_lb.status = 'PENDING_DELETE'
        self.mockclient.get.return_value = fake_lb

        exc = self.assertRaises(LoadbalancerDeleted,
                                self.resource.check_create_complete)
        self.assertEqual("The Load Balancer (ID 1111) has been deleted.",
                         str(exc))

    def test_handle_update_method(self):
        self.assertEqual(self.resource.handle_update(None, None, 'foo'),
                         'foo')

    def _test_update(self, diff):
        fake_lb = mock.Mock()
        fake_node = FakeNode(id=12345, address='a', port='b')
        fake_node.update = mock.Mock()
        expected_node = FakeNode(id=12345, address='a', port='b', **diff)
        expected_node.update = fake_node.update
        fake_lb.nodes = [fake_node]
        self.mockclient.get.return_value = fake_lb

        self.assertFalse(self.resource.check_update_complete(prop_diff=diff))

        self.mockclient.get.assert_called_once_with('some_lb_id')
        fake_node.update.assert_called_once_with()
        self.assertEqual(fake_node, expected_node)

    def test_update_condition(self):
        self._test_update({'condition': 'DISABLED'})

    def test_update_weight(self):
        self._test_update({'weight': 100})

    def test_update_type(self):
        self._test_update({'type': 'SECONDARY'})

    def test_update_multiple(self):
        self._test_update({'condition': 'DISABLED',
                           'weight': 100,
                           'type': 'SECONDARY'})

    def test_update_finished(self):
        fake_lb = mock.Mock()
        fake_node = FakeNode(id=12345, address='a', port='b',
                             condition='ENABLED')
        fake_node.update = mock.Mock()
        expected_node = FakeNode(id=12345, address='a', port='b',
                                 condition='ENABLED')
        expected_node.update = fake_node.update
        fake_lb.nodes = [fake_node]
        self.mockclient.get.return_value = fake_lb

        diff = {'condition': 'ENABLED'}
        self.assertTrue(self.resource.check_update_complete(prop_diff=diff))

        self.mockclient.get.assert_called_once_with('some_lb_id')
        self.assertFalse(fake_node.update.called)
        self.assertEqual(fake_node, expected_node)

    def test_update_lb_not_found(self):
        self.mockclient.get.side_effect = NotFound()

        diff = {'condition': 'ENABLED'}
        self.assertRaises(NotFound, self.resource.check_update_complete,
                          prop_diff=diff)

    def test_update_lb_deleted(self):
        fake_lb = mock.Mock()
        fake_lb.id = 1111
        fake_lb.status = 'DELETED'
        self.mockclient.get.return_value = fake_lb

        diff = {'condition': 'ENABLED'}
        exc = self.assertRaises(LoadbalancerDeleted,
                                self.resource.check_update_complete,
                                prop_diff=diff)
        self.assertEqual("The Load Balancer (ID 1111) has been deleted.",
                         str(exc))

    def test_update_lb_pending_delete(self):
        fake_lb = mock.Mock()
        fake_lb.id = 1111
        fake_lb.status = 'PENDING_DELETE'
        self.mockclient.get.return_value = fake_lb

        diff = {'condition': 'ENABLED'}
        exc = self.assertRaises(LoadbalancerDeleted,
                                self.resource.check_update_complete,
                                prop_diff=diff)
        self.assertEqual("The Load Balancer (ID 1111) has been deleted.",
                         str(exc))

    def test_update_node_not_found(self):
        fake_lb = mock.Mock()
        fake_lb.id = 4444
        fake_lb.nodes = []
        self.mockclient.get.return_value = fake_lb

        diff = {'condition': 'ENABLED'}
        exc = self.assertRaises(NodeNotFound,
                                self.resource.check_update_complete,
                                prop_diff=diff)
        self.assertEqual(
            "Node (ID 12345) not found on Load Balancer (ID 4444).",
            str(exc))

    def test_delete_no_id(self):
        self.resource.resource_id = None
        self.assertTrue(self.resource.check_delete_complete(None))

    def test_delete_lb_already_deleted(self):
        self.mockclient.get.side_effect = NotFound()
        self.assertTrue(self.resource.check_delete_complete(None))
        self.mockclient.get.assert_called_once_with('some_lb_id')

    def test_delete_lb_deleted_status(self):
        fake_lb = mock.Mock()
        fake_lb.status = 'DELETED'
        self.mockclient.get.return_value = fake_lb

        self.assertTrue(self.resource.check_delete_complete(None))
        self.mockclient.get.assert_called_once_with('some_lb_id')

    def test_delete_lb_pending_delete_status(self):
        fake_lb = mock.Mock()
        fake_lb.status = 'PENDING_DELETE'
        self.mockclient.get.return_value = fake_lb

        self.assertTrue(self.resource.check_delete_complete(None))
        self.mockclient.get.assert_called_once_with('some_lb_id')

    def test_delete_node_already_deleted(self):
        fake_lb = mock.Mock()
        fake_lb.nodes = []
        self.mockclient.get.return_value = fake_lb

        self.assertTrue(self.resource.check_delete_complete(None))
        self.mockclient.get.assert_called_once_with('some_lb_id')

    @mock.patch.object(lb_node.timeutils, 'utcnow')
    def test_drain_before_delete(self, mock_utcnow):
        fake_lb = mock.Mock()
        fake_node = FakeNode(id=12345, address='a', port='b')
        expected_node = FakeNode(id=12345, address='a', port='b',
                                 condition='DRAINING')
        fake_node.update = mock.Mock()
        expected_node.update = fake_node.update
        fake_node.delete = mock.Mock()
        expected_node.delete = fake_node.delete
        fake_lb.nodes = [fake_node]
        self.mockclient.get.return_value = fake_lb

        now = datetime.datetime.utcnow()
        mock_utcnow.return_value = now

        self.assertFalse(self.resource.check_delete_complete(now))

        self.mockclient.get.assert_called_once_with('some_lb_id')
        fake_node.update.assert_called_once_with()
        self.assertFalse(fake_node.delete.called)
        self.assertEqual(fake_node, expected_node)

    @mock.patch.object(lb_node.timeutils, 'utcnow')
    def test_delete_waiting(self, mock_utcnow):
        fake_lb = mock.Mock()
        fake_node = FakeNode(id=12345, address='a', port='b',
                             condition='DRAINING')
        expected_node = FakeNode(id=12345, address='a', port='b',
                                 condition='DRAINING')
        fake_node.update = mock.Mock()
        expected_node.update = fake_node.update
        fake_node.delete = mock.Mock()
        expected_node.delete = fake_node.delete
        fake_lb.nodes = [fake_node]
        self.mockclient.get.return_value = fake_lb

        now = datetime.datetime.utcnow()
        now_plus_30 = now + datetime.timedelta(seconds=30)
        mock_utcnow.return_value = now_plus_30

        self.assertFalse(self.resource.check_delete_complete(now))

        self.mockclient.get.assert_called_once_with('some_lb_id')
        self.assertFalse(fake_node.update.called)
        self.assertFalse(fake_node.delete.called)
        self.assertEqual(fake_node, expected_node)

    @mock.patch.object(lb_node.timeutils, 'utcnow')
    def test_delete_finishing(self, mock_utcnow):
        fake_lb = mock.Mock()
        fake_node = FakeNode(id=12345, address='a', port='b',
                             condition='DRAINING')
        expected_node = FakeNode(id=12345, address='a', port='b',
                                 condition='DRAINING')
        fake_node.update = mock.Mock()
        expected_node.update = fake_node.update
        fake_node.delete = mock.Mock()
        expected_node.delete = fake_node.delete
        fake_lb.nodes = [fake_node]
        self.mockclient.get.return_value = fake_lb

        now = datetime.datetime.utcnow()
        now_plus_62 = now + datetime.timedelta(seconds=62)
        mock_utcnow.return_value = now_plus_62

        self.assertFalse(self.resource.check_delete_complete(now))

        self.mockclient.get.assert_called_once_with('some_lb_id')
        self.assertFalse(fake_node.update.called)
        self.assertTrue(fake_node.delete.called)
        self.assertEqual(fake_node, expected_node)
heat-10.0.0/contrib/rackspace/rackspace/tests/__init__.py
import sys

from mox3 import mox
sys.modules['mox'] = mox
heat-10.0.0/contrib/rackspace/rackspace/tests/test_auto_scale.py
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import itertools import mock import six from heat.common import exception from heat.common import template_format from heat.engine.clients.os import glance from heat.engine.clients.os import nova from heat.engine import resource from heat.engine import rsrc_defn from heat.engine import scheduler from heat.tests import common from heat.tests import utils from ..resources import auto_scale # noqa class FakeScalingGroup(object): """A fake implementation of pyrax's ScalingGroup object.""" def __init__(self, id, **kwargs): self.id = id self.kwargs = kwargs class FakeScalePolicy(object): """A fake implementation of pyrax's AutoScalePolicy object.""" def __init__(self, id, **kwargs): self.id = id self.kwargs = kwargs class FakeWebHook(object): """A fake implementation of pyrax's AutoScaleWebhook object.""" def __init__(self, id, **kwargs): self.id = id self.kwargs = kwargs self.links = [ {'rel': 'self', 'href': 'self-url'}, {'rel': 'capability', 'href': 'capability-url'}] class FakeAutoScale(object): """A fake implementation of pyrax's autoscale client.""" def __init__(self): self.groups = {} self.policies = {} self.webhooks = {} self.group_counter = itertools.count() self.policy_counter = itertools.count() self.webhook_counter = itertools.count() def create(self, **kwargs): """Create a scaling group.""" new_id = str(next(self.group_counter)) fsg = FakeScalingGroup(new_id, **kwargs) self.groups[new_id] = fsg return fsg def _check_args(self, kwargs, allowed): for parameter in kwargs: if parameter not in allowed: raise TypeError("unexpected argument %r" % 
                                (parameter,))

    def _get_group(self, id):
        if id not in self.groups:
            raise auto_scale.NotFound("Group %s not found!" % (id,))
        return self.groups[id]

    def _get_policy(self, id):
        if id not in self.policies:
            raise auto_scale.NotFound("Policy %s not found!" % (id,))
        return self.policies[id]

    def _get_webhook(self, webhook_id):
        if webhook_id not in self.webhooks:
            raise auto_scale.NotFound(
                "Webhook %s doesn't exist!" % (webhook_id,))
        return self.webhooks[webhook_id]

    def replace(self, group_id, **kwargs):
        """Update the groupConfiguration section of a scaling group."""
        allowed = ['name', 'cooldown', 'min_entities', 'max_entities',
                   'metadata']
        self._check_args(kwargs, allowed)
        self._get_group(group_id).kwargs = kwargs

    def replace_launch_config(self, group_id, **kwargs):
        """Update the launch configuration on a scaling group."""
        if kwargs.get('launch_config_type') == 'launch_server':
            allowed = ['launch_config_type', 'server_name', 'image',
                       'flavor', 'disk_config', 'metadata', 'personality',
                       'networks', 'load_balancers', 'key_name', 'user_data',
                       'config_drive']
        elif kwargs.get('launch_config_type') == 'launch_stack':
            allowed = ['launch_config_type', 'template', 'template_url',
                       'disable_rollback', 'environment', 'files',
                       'parameters', 'timeout_mins']
        self._check_args(kwargs, allowed)
        self._get_group(group_id).kwargs = kwargs

    def delete(self, group_id):
        """Delete the group, if the min entities and max entities are 0."""
        group = self._get_group(group_id)
        if (group.kwargs['min_entities'] > 0
                or group.kwargs['max_entities'] > 0):
            raise Exception("Can't delete yet!")
        del self.groups[group_id]

    def add_policy(self, **kwargs):
        """Create and store a FakeScalePolicy."""
        allowed = [
            'scaling_group', 'name', 'policy_type', 'cooldown',
            'change', 'is_percent', 'desired_capacity', 'args']
        self._check_args(kwargs, allowed)
        policy_id = str(next(self.policy_counter))
        policy = FakeScalePolicy(policy_id, **kwargs)
        self.policies[policy_id] = policy
        return policy

    def replace_policy(self, scaling_group, policy, **kwargs):
        allowed = [
            'name', 'policy_type', 'cooldown', 'change', 'is_percent',
            'desired_capacity', 'args']
        self._check_args(kwargs, allowed)
        policy = self._get_policy(policy)
        assert policy.kwargs['scaling_group'] == scaling_group
        kwargs['scaling_group'] = scaling_group
        policy.kwargs = kwargs

    def add_webhook(self, **kwargs):
        """Create and store a FakeWebHook."""
        allowed = ['scaling_group', 'policy', 'name', 'metadata']
        self._check_args(kwargs, allowed)
        webhook_id = str(next(self.webhook_counter))
        webhook = FakeWebHook(webhook_id, **kwargs)
        self.webhooks[webhook_id] = webhook
        return webhook

    def delete_policy(self, scaling_group, policy):
        """Delete a policy, if it exists."""
        if policy not in self.policies:
            raise auto_scale.NotFound("Policy %s doesn't exist!" % (policy,))
        assert self.policies[policy].kwargs['scaling_group'] == scaling_group
        del self.policies[policy]

    def delete_webhook(self, scaling_group, policy, webhook_id):
        """Delete a webhook, if it exists."""
        webhook = self._get_webhook(webhook_id)
        assert webhook.kwargs['scaling_group'] == scaling_group
        assert webhook.kwargs['policy'] == policy
        del self.webhooks[webhook_id]

    def replace_webhook(self, scaling_group, policy, webhook,
                        name=None, metadata=None):
        webhook = self._get_webhook(webhook)
        assert webhook.kwargs['scaling_group'] == scaling_group
        assert webhook.kwargs['policy'] == policy
        webhook.kwargs['name'] = name
        webhook.kwargs['metadata'] = metadata


class ScalingGroupTest(common.HeatTestCase):

    server_template = template_format.parse('''
    HeatTemplateFormatVersion: "2012-12-12"
    Description: "Rackspace Auto Scale"
    Parameters: {}
    Resources:
        my_group:
            Type: Rackspace::AutoScale::Group
            Properties:
                groupConfiguration:
                    name: "My Group"
                    cooldown: 60
                    minEntities: 1
                    maxEntities: 25
                    metadata:
                        group: metadata
                launchConfiguration:
                    type: "launch_server"
                    args:
                        server:
                            name: autoscaled-server
                            flavorRef: flavor-ref
                            imageRef: image-ref
                            key_name: my-key
                            metadata:
                                server: metadata
                            personality:
                                /tmp/testfile:
"dGVzdCBjb250ZW50" networks: - uuid: "00000000-0000-0000-0000-000000000000" - uuid: "11111111-1111-1111-1111-111111111111" loadBalancers: - loadBalancerId: 234 port: 80 ''') stack_template = template_format.parse(''' HeatTemplateFormatVersion: "2012-12-12" Description: "Rackspace Auto Scale" Parameters: {} Resources: my_group: Type: Rackspace::AutoScale::Group Properties: groupConfiguration: name: "My Group" cooldown: 60 minEntities: 1 maxEntities: 25 metadata: group: metadata launchConfiguration: type: launch_stack args: stack: template: | heat_template_version: 2015-10-15 description: This is a Heat template parameters: image: default: cirros-0.3.4-x86_64-uec type: string flavor: default: m1.tiny type: string resources: rand: type: OS::Heat::RandomString disable_rollback: False environment: parameters: image: Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM) resource_registry: Heat::InstallConfigAgent: https://myhost.com/bootconfig.yaml files: fileA.yaml: Contents of the file file:///usr/fileB.template: Contents of the file parameters: flavor: 4 GB Performance timeout_mins: 30 ''') def setUp(self): super(ScalingGroupTest, self).setUp() for res_name, res_class in auto_scale.resource_mapping().items(): resource._register_class(res_name, res_class) self.fake_auto_scale = FakeAutoScale() self.patchobject(auto_scale.Group, 'auto_scale', return_value=self.fake_auto_scale) # mock nova and glance client methods to satisfy constraints mock_im = self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id') mock_im.return_value = 'image-ref' mock_fl = self.patchobject(nova.NovaClientPlugin, 'find_flavor_by_name_or_id') mock_fl.return_value = 'flavor-ref' def _setup_test_stack(self, template=None): if template is None: template = self.server_template self.stack = utils.parse_stack(template) self.stack.create() self.assertEqual( ('CREATE', 'COMPLETE'), self.stack.state, self.stack.status_reason) def test_group_create_server(self): """Creating a group passes all the correct 
arguments to pyrax. Also saves the group ID as the resource ID. """ self._setup_test_stack() self.assertEqual(1, len(self.fake_auto_scale.groups)) self.assertEqual( { 'cooldown': 60, 'config_drive': False, 'user_data': None, 'disk_config': None, 'flavor': 'flavor-ref', 'image': 'image-ref', 'load_balancers': [{ 'loadBalancerId': 234, 'port': 80, }], 'key_name': "my-key", 'launch_config_type': u'launch_server', 'max_entities': 25, 'group_metadata': {'group': 'metadata'}, 'metadata': {'server': 'metadata'}, 'min_entities': 1, 'name': 'My Group', 'networks': [{'uuid': '00000000-0000-0000-0000-000000000000'}, {'uuid': '11111111-1111-1111-1111-111111111111'}], 'personality': [{ 'path': u'/tmp/testfile', 'contents': u'dGVzdCBjb250ZW50'}], 'server_name': u'autoscaled-server'}, self.fake_auto_scale.groups['0'].kwargs) resource = self.stack['my_group'] self.assertEqual('0', resource.FnGetRefId()) def test_group_create_stack(self): """Creating a group passes all the correct arguments to pyrax. Also saves the group ID as the resource ID. 
""" self._setup_test_stack(self.stack_template) self.assertEqual(1, len(self.fake_auto_scale.groups)) self.assertEqual( { 'cooldown': 60, 'min_entities': 1, 'max_entities': 25, 'group_metadata': {'group': 'metadata'}, 'name': 'My Group', 'launch_config_type': u'launch_stack', 'template': ( '''heat_template_version: 2015-10-15 description: This is a Heat template parameters: image: default: cirros-0.3.4-x86_64-uec type: string flavor: default: m1.tiny type: string resources: rand: type: OS::Heat::RandomString '''), 'template_url': None, 'disable_rollback': False, 'environment': { 'parameters': { 'image': 'Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)', }, 'resource_registry': { 'Heat::InstallConfigAgent': ('https://myhost.com/' 'bootconfig.yaml') } }, 'files': { 'fileA.yaml': 'Contents of the file', 'file:///usr/fileB.template': 'Contents of the file' }, 'parameters': { 'flavor': '4 GB Performance', }, 'timeout_mins': 30, }, self.fake_auto_scale.groups['0'].kwargs ) resource = self.stack['my_group'] self.assertEqual('0', resource.FnGetRefId()) def test_group_create_no_personality(self): template = template_format.parse(''' HeatTemplateFormatVersion: "2012-12-12" Description: "Rackspace Auto Scale" Parameters: {} Resources: my_group: Type: Rackspace::AutoScale::Group Properties: groupConfiguration: name: "My Group" cooldown: 60 minEntities: 1 maxEntities: 25 metadata: group: metadata launchConfiguration: type: "launch_server" args: server: name: autoscaled-server flavorRef: flavor-ref imageRef: image-ref key_name: my-key metadata: server: metadata networks: - uuid: "00000000-0000-0000-0000-000000000000" - uuid: "11111111-1111-1111-1111-111111111111" ''') self.stack = utils.parse_stack(template) self.stack.create() self.assertEqual( ('CREATE', 'COMPLETE'), self.stack.state, self.stack.status_reason) self.assertEqual(1, len(self.fake_auto_scale.groups)) self.assertEqual( { 'cooldown': 60, 'config_drive': False, 'user_data': None, 'disk_config': None, 'flavor': 'flavor-ref', 
'image': 'image-ref', 'launch_config_type': 'launch_server', 'load_balancers': [], 'key_name': "my-key", 'max_entities': 25, 'group_metadata': {'group': 'metadata'}, 'metadata': {'server': 'metadata'}, 'min_entities': 1, 'name': 'My Group', 'networks': [{'uuid': '00000000-0000-0000-0000-000000000000'}, {'uuid': '11111111-1111-1111-1111-111111111111'}], 'personality': None, 'server_name': u'autoscaled-server'}, self.fake_auto_scale.groups['0'].kwargs) resource = self.stack['my_group'] self.assertEqual('0', resource.FnGetRefId()) def test_check(self): self._setup_test_stack() resource = self.stack['my_group'] mock_get = mock.Mock() resource.auto_scale().get = mock_get scheduler.TaskRunner(resource.check)() self.assertEqual('CHECK', resource.action) self.assertEqual('COMPLETE', resource.status) mock_get.side_effect = auto_scale.NotFound('boom') exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(resource.check)) self.assertEqual('CHECK', resource.action) self.assertEqual('FAILED', resource.status) self.assertIn('boom', str(exc)) def test_update_group_config(self): """Updates the groupConfiguration section. Updates the groupConfiguration section in a template results in a pyrax call to update the group configuration. """ self._setup_test_stack() resource = self.stack['my_group'] uprops = copy.deepcopy(dict(resource.properties.data)) uprops['groupConfiguration']['minEntities'] = 5 new_template = rsrc_defn.ResourceDefinition(resource.name, resource.type(), uprops) scheduler.TaskRunner(resource.update, new_template)() self.assertEqual(1, len(self.fake_auto_scale.groups)) self.assertEqual( 5, self.fake_auto_scale.groups['0'].kwargs['min_entities']) def test_update_launch_config_server(self): """Updates the launchConfigresults section. Updates the launchConfigresults section in a template results in a pyrax call to update the launch configuration. 
""" self._setup_test_stack() resource = self.stack['my_group'] uprops = copy.deepcopy(dict(resource.properties.data)) lcargs = uprops['launchConfiguration']['args'] lcargs['loadBalancers'] = [{'loadBalancerId': '1', 'port': 80}] new_template = rsrc_defn.ResourceDefinition(resource.name, resource.type(), uprops) scheduler.TaskRunner(resource.update, new_template)() self.assertEqual(1, len(self.fake_auto_scale.groups)) self.assertEqual( [{'loadBalancerId': 1, 'port': 80}], self.fake_auto_scale.groups['0'].kwargs['load_balancers']) def test_update_launch_config_stack(self): self._setup_test_stack(self.stack_template) resource = self.stack['my_group'] uprops = copy.deepcopy(dict(resource.properties.data)) lcargs = uprops['launchConfiguration']['args'] lcargs['stack']['timeout_mins'] = 60 new_template = rsrc_defn.ResourceDefinition(resource.name, resource.type(), uprops) scheduler.TaskRunner(resource.update, new_template)() self.assertEqual(1, len(self.fake_auto_scale.groups)) self.assertEqual( 60, self.fake_auto_scale.groups['0'].kwargs['timeout_mins']) def test_delete(self): """Deleting a ScalingGroup resource invokes pyrax API to delete it.""" self._setup_test_stack() resource = self.stack['my_group'] scheduler.TaskRunner(resource.delete)() self.assertEqual({}, self.fake_auto_scale.groups) def test_delete_without_backing_group(self): """Resource deletion succeeds, if no backing scaling group exists.""" self._setup_test_stack() resource = self.stack['my_group'] del self.fake_auto_scale.groups['0'] scheduler.TaskRunner(resource.delete)() self.assertEqual({}, self.fake_auto_scale.groups) def test_delete_waits_for_server_deletion(self): """Test case for waiting for successful resource deletion. The delete operation may fail until the servers are really gone; the resource retries until success. 
""" self._setup_test_stack() delete_counter = itertools.count() def delete(group_id): count = next(delete_counter) if count < 3: raise auto_scale.Forbidden("Not empty!") self.patchobject(self.fake_auto_scale, 'delete', side_effect=delete) resource = self.stack['my_group'] scheduler.TaskRunner(resource.delete)() # It really called delete until it succeeded: self.assertEqual(4, next(delete_counter)) def test_delete_blows_up_on_other_errors(self): """Test case for correct error handling during deletion. Only the Forbidden (403) error is honored as an indicator of pending deletion; other errors cause deletion to fail. """ self._setup_test_stack() def delete(group_id): 1 / 0 self.patchobject(self.fake_auto_scale, 'delete', side_effect=delete) resource = self.stack['my_group'] err = self.assertRaises( exception.ResourceFailure, scheduler.TaskRunner(resource.delete)) self.assertIsInstance(err.exc, ZeroDivisionError) class PolicyTest(common.HeatTestCase): policy_template = template_format.parse(''' HeatTemplateFormatVersion: "2012-12-12" Description: "Rackspace Auto Scale" Parameters: {} Resources: my_policy: Type: Rackspace::AutoScale::ScalingPolicy Properties: group: "my-group-id" name: "+10 on webhook" change: 10 cooldown: 0 type: "webhook" ''') def setUp(self): super(PolicyTest, self).setUp() for res_name, res_class in auto_scale.resource_mapping().items(): resource._register_class(res_name, res_class) self.fake_auto_scale = FakeAutoScale() self.patchobject(auto_scale.ScalingPolicy, 'auto_scale', return_value=self.fake_auto_scale) def _setup_test_stack(self, template): self.stack = utils.parse_stack(template) self.stack.create() self.assertEqual( ('CREATE', 'COMPLETE'), self.stack.state, self.stack.status_reason) def test_create_webhook_change(self): """Creating the resource creates the scaling policy with pyrax. Also sets the resource's ID to {group_id}:{policy_id}. 
""" self._setup_test_stack(self.policy_template) resource = self.stack['my_policy'] self.assertEqual('my-group-id:0', resource.FnGetRefId()) self.assertEqual( { 'name': '+10 on webhook', 'scaling_group': 'my-group-id', 'change': 10, 'cooldown': 0, 'policy_type': 'webhook'}, self.fake_auto_scale.policies['0'].kwargs) def test_webhook_change_percent(self): """Test case for specified changePercent. When changePercent is specified, it translates to pyrax arguments 'change' and 'is_percent'. """ template = copy.deepcopy(self.policy_template) template['Resources']['my_policy']['Properties']['changePercent'] = 10 del template['Resources']['my_policy']['Properties']['change'] self._setup_test_stack(template) self.assertEqual( { 'name': '+10 on webhook', 'scaling_group': 'my-group-id', 'change': 10, 'is_percent': True, 'cooldown': 0, 'policy_type': 'webhook'}, self.fake_auto_scale.policies['0'].kwargs) def test_webhook_desired_capacity(self): """Test case for desiredCapacity property. The desiredCapacity property translates to the desired_capacity pyrax argument. 
""" template = copy.deepcopy(self.policy_template) template['Resources']['my_policy']['Properties']['desiredCapacity'] = 1 del template['Resources']['my_policy']['Properties']['change'] self._setup_test_stack(template) self.assertEqual( { 'name': '+10 on webhook', 'scaling_group': 'my-group-id', 'desired_capacity': 1, 'cooldown': 0, 'policy_type': 'webhook'}, self.fake_auto_scale.policies['0'].kwargs) def test_schedule(self): """We can specify schedule-type policies with args.""" template = copy.deepcopy(self.policy_template) props = template['Resources']['my_policy']['Properties'] props['type'] = 'schedule' props['args'] = {'cron': '0 0 0 * *'} self._setup_test_stack(template) self.assertEqual( { 'name': '+10 on webhook', 'scaling_group': 'my-group-id', 'change': 10, 'cooldown': 0, 'policy_type': 'schedule', 'args': {'cron': '0 0 0 * *'}}, self.fake_auto_scale.policies['0'].kwargs) def test_update(self): """Updating the resource calls appropriate update method with pyrax.""" self._setup_test_stack(self.policy_template) resource = self.stack['my_policy'] uprops = copy.deepcopy(dict(resource.properties.data)) uprops['changePercent'] = 50 del uprops['change'] template = rsrc_defn.ResourceDefinition(resource.name, resource.type(), uprops) scheduler.TaskRunner(resource.update, template)() self.assertEqual( { 'name': '+10 on webhook', 'scaling_group': 'my-group-id', 'change': 50, 'is_percent': True, 'cooldown': 0, 'policy_type': 'webhook'}, self.fake_auto_scale.policies['0'].kwargs) def test_delete(self): """Deleting the resource deletes the policy with pyrax.""" self._setup_test_stack(self.policy_template) resource = self.stack['my_policy'] scheduler.TaskRunner(resource.delete)() self.assertEqual({}, self.fake_auto_scale.policies) def test_delete_policy_non_existent(self): """Test case for deleting resource without backing policy. Deleting a resource for which there is no backing policy succeeds silently. 
""" self._setup_test_stack(self.policy_template) resource = self.stack['my_policy'] del self.fake_auto_scale.policies['0'] scheduler.TaskRunner(resource.delete)() self.assertEqual({}, self.fake_auto_scale.policies) class WebHookTest(common.HeatTestCase): webhook_template = template_format.parse(''' HeatTemplateFormatVersion: "2012-12-12" Description: "Rackspace Auto Scale" Parameters: {} Resources: my_webhook: Type: Rackspace::AutoScale::WebHook Properties: policy: my-group-id:my-policy-id name: "exec my policy" metadata: a: b ''') def setUp(self): super(WebHookTest, self).setUp() for res_name, res_class in auto_scale.resource_mapping().items(): resource._register_class(res_name, res_class) self.fake_auto_scale = FakeAutoScale() self.patchobject(auto_scale.WebHook, 'auto_scale', return_value=self.fake_auto_scale) def _setup_test_stack(self, template): self.stack = utils.parse_stack(template) self.stack.create() self.assertEqual( ('CREATE', 'COMPLETE'), self.stack.state, self.stack.status_reason) def test_create(self): """Creates a webhook with pyrax and makes attributes available.""" self._setup_test_stack(self.webhook_template) resource = self.stack['my_webhook'] self.assertEqual( { 'name': 'exec my policy', 'scaling_group': 'my-group-id', 'policy': 'my-policy-id', 'metadata': {'a': 'b'}}, self.fake_auto_scale.webhooks['0'].kwargs) self.assertEqual("self-url", resource.FnGetAtt("executeUrl")) self.assertEqual("capability-url", resource.FnGetAtt("capabilityUrl")) def test_failed_create(self): """When a create fails, getting the attributes returns None.""" template = copy.deepcopy(self.webhook_template) template['Resources']['my_webhook']['Properties']['policy'] = 'foobar' self.stack = utils.parse_stack(template) self.stack.create() resource = self.stack['my_webhook'] self.assertIsNone(resource.FnGetAtt('capabilityUrl')) def test_update(self): self._setup_test_stack(self.webhook_template) resource = self.stack['my_webhook'] uprops = 
copy.deepcopy(dict(resource.properties.data)) uprops['metadata']['a'] = 'different!' uprops['name'] = 'newhook' template = rsrc_defn.ResourceDefinition(resource.name, resource.type(), uprops) scheduler.TaskRunner(resource.update, template)() self.assertEqual( { 'name': 'newhook', 'scaling_group': 'my-group-id', 'policy': 'my-policy-id', 'metadata': {'a': 'different!'}}, self.fake_auto_scale.webhooks['0'].kwargs) def test_delete(self): """Deleting the resource deletes the webhook with pyrax.""" self._setup_test_stack(self.webhook_template) resource = self.stack['my_webhook'] scheduler.TaskRunner(resource.delete)() self.assertEqual({}, self.fake_auto_scale.webhooks) def test_delete_without_backing_webhook(self): """Test case for deleting resource without backing webhook. Deleting a resource for which there is no backing webhook succeeds silently. """ self._setup_test_stack(self.webhook_template) resource = self.stack['my_webhook'] del self.fake_auto_scale.webhooks['0'] scheduler.TaskRunner(resource.delete)() self.assertEqual({}, self.fake_auto_scale.webhooks) @mock.patch.object(resource.Resource, "client_plugin") @mock.patch.object(resource.Resource, "client") class AutoScaleGroupValidationTests(common.HeatTestCase): def setUp(self): super(AutoScaleGroupValidationTests, self).setUp() self.mockstack = mock.Mock() self.mockstack.has_cache_data.return_value = False self.mockstack.db_resource_get.return_value = None def test_validate_no_rcv3_pool(self, mock_client, mock_plugin): asg_properties = { "groupConfiguration": { "name": "My Group", "cooldown": 60, "minEntities": 1, "maxEntities": 25, "metadata": { "group": "metadata", }, }, "launchConfiguration": { "type": "launch_server", "args": { "loadBalancers": [{ "loadBalancerId": 'not integer!', }], "server": { "name": "sdfsdf", "flavorRef": "ffdgdf", "imageRef": "image-ref", }, }, }, } rsrcdef = rsrc_defn.ResourceDefinition( "test", auto_scale.Group, properties=asg_properties) asg = auto_scale.Group("test", rsrcdef, 
self.mockstack) mock_client().list_load_balancer_pools.return_value = [] error = self.assertRaises( exception.StackValidationFailed, asg.validate) self.assertEqual( 'Could not find RackConnectV3 pool with id not integer!: ', six.text_type(error)) def test_validate_rcv3_pool_found(self, mock_client, mock_plugin): asg_properties = { "groupConfiguration": { "name": "My Group", "cooldown": 60, "minEntities": 1, "maxEntities": 25, "metadata": { "group": "metadata", }, }, "launchConfiguration": { "type": "launch_server", "args": { "loadBalancers": [{ "loadBalancerId": 'pool_exists', }], "server": { "name": "sdfsdf", "flavorRef": "ffdgdf", "imageRef": "image-ref", }, }, }, } rsrcdef = rsrc_defn.ResourceDefinition( "test", auto_scale.Group, properties=asg_properties) asg = auto_scale.Group("test", rsrcdef, self.mockstack) mock_client().list_load_balancer_pools.return_value = [ mock.Mock(id='pool_exists'), ] self.assertIsNone(asg.validate()) def test_validate_no_lb_specified(self, mock_client, mock_plugin): asg_properties = { "groupConfiguration": { "name": "My Group", "cooldown": 60, "minEntities": 1, "maxEntities": 25, "metadata": { "group": "metadata", }, }, "launchConfiguration": { "type": "launch_server", "args": { "server": { "name": "sdfsdf", "flavorRef": "ffdgdf", "imageRef": "image-ref", }, }, }, } rsrcdef = rsrc_defn.ResourceDefinition( "test", auto_scale.Group, properties=asg_properties) asg = auto_scale.Group("test", rsrcdef, self.mockstack) self.assertIsNone(asg.validate()) def test_validate_launch_stack(self, mock_client, mock_plugin): asg_properties = { "groupConfiguration": { "name": "My Group", "cooldown": 60, "minEntities": 1, "maxEntities": 25, "metadata": { "group": "metadata", }, }, "launchConfiguration": { "type": "launch_stack", "args": { "stack": { 'template': ( '''heat_template_version: 2015-10-15 description: This is a Heat template parameters: image: default: cirros-0.3.4-x86_64-uec type: string flavor: default: m1.tiny type: string resources: 
  rand:
    type: OS::Heat::RandomString
'''),
                        'template_url': None,
                        'disable_rollback': False,
                        'environment': {
                            'parameters': {
                                'image':
                                    'Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)',
                            },
                            'resource_registry': {
                                'Heat::InstallConfigAgent': (
                                    'https://myhost.com/bootconfig.yaml')
                            }
                        },
                        'files': {
                            'fileA.yaml': 'Contents of the file',
                            'file:///usr/fileB.yaml': 'Contents of the file'
                        },
                        'parameters': {
                            'flavor': '4 GB Performance',
                        },
                        'timeout_mins': 30,
                    }
                }
            }
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", auto_scale.Group, properties=asg_properties)
        asg = auto_scale.Group("test", rsrcdef, self.mockstack)
        self.assertIsNone(asg.validate())

    def test_validate_launch_server_and_stack(self, mock_client,
                                              mock_plugin):
        asg_properties = {
            "groupConfiguration": {
                "name": "My Group",
                "cooldown": 60,
                "minEntities": 1,
                "maxEntities": 25,
                "metadata": {
                    "group": "metadata",
                },
            },
            "launchConfiguration": {
                "type": "launch_server",
                "args": {
                    "server": {
                        "name": "sdfsdf",
                        "flavorRef": "ffdgdf",
                        "imageRef": "image-ref",
                    },
                    "stack": {
                        'template': (
                            '''heat_template_version: 2015-10-15
description: This is a Heat template
parameters:
  image:
    default: cirros-0.3.4-x86_64-uec
    type: string
  flavor:
    default: m1.tiny
    type: string
resources:
  rand:
    type: OS::Heat::RandomString
'''),
                        'template_url': None,
                        'disable_rollback': False,
                        'environment': {
                            'parameters': {
                                'image':
                                    'Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)',
                            },
                            'resource_registry': {
                                'Heat::InstallConfigAgent': (
                                    'https://myhost.com/bootconfig.yaml')
                            }
                        },
                        'files': {
                            'fileA.yaml': 'Contents of the file',
                            'file:///usr/fileB.yaml': 'Contents of the file'
                        },
                        'parameters': {
                            'flavor': '4 GB Performance',
                        },
                        'timeout_mins': 30,
                    }
                }
            }
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", auto_scale.Group, properties=asg_properties)
        asg = auto_scale.Group("test", rsrcdef, self.mockstack)
        error = self.assertRaises(
            exception.StackValidationFailed, asg.validate)
        self.assertIn(
            'Must provide one of server or stack in launchConfiguration',
            six.text_type(error))

    def test_validate_no_launch_server_or_stack(self, mock_client,
                                                mock_plugin):
        asg_properties = {
            "groupConfiguration": {
                "name": "My Group",
                "cooldown": 60,
                "minEntities": 1,
                "maxEntities": 25,
                "metadata": {
                    "group": "metadata",
                },
            },
            "launchConfiguration": {
                "type": "launch_server",
                "args": {}
            }
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", auto_scale.Group, properties=asg_properties)
        asg = auto_scale.Group("test", rsrcdef, self.mockstack)
        error = self.assertRaises(
            exception.StackValidationFailed, asg.validate)
        self.assertIn(
            'Must provide one of server or stack in launchConfiguration',
            six.text_type(error))

    def test_validate_stack_template_and_template_url(self, mock_client,
                                                      mock_plugin):
        asg_properties = {
            "groupConfiguration": {
                "name": "My Group",
                "cooldown": 60,
                "minEntities": 1,
                "maxEntities": 25,
                "metadata": {
                    "group": "metadata",
                },
            },
            "launchConfiguration": {
                "type": "launch_server",
                "args": {
                    "stack": {
                        'template': (
                            '''heat_template_version: 2015-10-15
description: This is a Heat template
parameters:
  image:
    default: cirros-0.3.4-x86_64-uec
    type: string
  flavor:
    default: m1.tiny
    type: string
resources:
  rand:
    type: OS::Heat::RandomString
'''),
                        'template_url': 'https://myhost.com/template.yaml',
                    }
                }
            }
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", auto_scale.Group, properties=asg_properties)
        asg = auto_scale.Group("test", rsrcdef, self.mockstack)
        error = self.assertRaises(
            exception.StackValidationFailed, asg.validate)
        self.assertIn(
            'Must provide one of template or template_url',
            six.text_type(error))

    def test_validate_stack_no_template_or_template_url(self, mock_client,
                                                        mock_plugin):
        asg_properties = {
            "groupConfiguration": {
                "name": "My Group",
                "cooldown": 60,
                "minEntities": 1,
                "maxEntities": 25,
                "metadata": {
                    "group": "metadata",
                },
            },
            "launchConfiguration": {
                "type": "launch_server",
                "args": {
                    "stack": {
                        'disable_rollback': False,
                        'environment': {
                            'parameters': {
                                'image':
                                    'Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)',
                            },
                            'resource_registry': {
                                'Heat::InstallConfigAgent': (
                                    'https://myhost.com/bootconfig.yaml')
                            }
                        },
                        'files': {
                            'fileA.yaml': 'Contents of the file',
                            'file:///usr/fileB.yaml': 'Contents of the file'
                        },
                        'parameters': {
                            'flavor': '4 GB Performance',
                        },
                        'timeout_mins': 30,
                    }
                }
            }
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", auto_scale.Group, properties=asg_properties)
        asg = auto_scale.Group("test", rsrcdef, self.mockstack)
        error = self.assertRaises(
            exception.StackValidationFailed, asg.validate)
        self.assertIn(
            'Must provide one of template or template_url',
            six.text_type(error))

    def test_validate_invalid_template(self, mock_client, mock_plugin):
        asg_properties = {
            "groupConfiguration": {
                "name": "My Group",
                "cooldown": 60,
                "minEntities": 1,
                "maxEntities": 25,
                "metadata": {
                    "group": "metadata",
                },
            },
            "launchConfiguration": {
                "type": "launch_stack",
                "args": {
                    "stack": {
                        'template': (
                            '''SJDADKJAJKLSheat_template_version: 2015-10-15
description: This is a Heat template
parameters:
  image:
    default: cirros-0.3.4-x86_64-uec
    type: string
  flavor:
    default: m1.tiny
    type: string
resources:
  rand:
    type: OS::Heat::RandomString
'''),
                        'template_url': None,
                        'disable_rollback': False,
                        'environment': {'Foo': 'Bar'},
                        'files': {
                            'fileA.yaml': 'Contents of the file',
                            'file:///usr/fileB.yaml': 'Contents of the file'
                        },
                        'parameters': {
                            'flavor': '4 GB Performance',
                        },
                        'timeout_mins': 30,
                    }
                }
            }
        }
        rsrcdef = rsrc_defn.ResourceDefinition(
            "test", auto_scale.Group, properties=asg_properties)
        asg = auto_scale.Group("test", rsrcdef, self.mockstack)
        error = self.assertRaises(
            exception.StackValidationFailed, asg.validate)
        self.assertIn(
            'Encountered error while loading template:',
            six.text_type(error))

heat-10.0.0/contrib/rackspace/rackspace/tests/test_cloudnetworks.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the
#    License for the specific language governing permissions and limitations
#    under the License.

import uuid

import mock
from oslo_utils import reflection
import six

from heat.common import exception
from heat.common import template_format
from heat.engine import resource
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

from ..resources import cloudnetworks  # noqa

try:
    from pyrax.exceptions import NotFound  # noqa
except ImportError:
    from ..resources.cloudnetworks import NotFound  # noqa


class FakeNetwork(object):

    def __init__(self, client, label="test_network", cidr="172.16.0.0/24"):
        self.client = client
        self.label = label
        self.cidr = cidr
        self.id = str(uuid.uuid4())

    def _is_deleted(self):
        return (self.client and
                self.id not in [nw.id for nw in self.client.networks])

    def get(self):
        if self._is_deleted():
            raise NotFound("I am deleted")

    def delete(self):
        self.client._delete(self)


class FakeClient(object):

    def __init__(self):
        self.networks = []

    def create(self, label=None, cidr=None):
        nw = FakeNetwork(self, label=label, cidr=cidr)
        self.networks.append(nw)
        return nw

    def get(self, nwid):
        for nw in self.networks:
            if nw.id == nwid:
                return nw
        raise NotFound("No network %s" % nwid)

    def _delete(self, nw):
        try:
            self.networks.remove(nw)
        except ValueError:
            pass


class FakeClientRaiseException(FakeClient):

    def create(self, label=None, cidr=None):
        raise Exception

    def get(self, nwid):
        raise Exception


@mock.patch.object(cloudnetworks.CloudNetwork, "cloud_networks")
class CloudNetworkTest(common.HeatTestCase):

    _template = template_format.parse("""
heat_template_version: 2013-05-23
description: Test stack for Rackspace Cloud Networks
resources:
  cnw:
    type: Rackspace::Cloud::Network
    properties:
      label: test_network
      cidr: 172.16.0.0/24
""")

    def setUp(self):
        super(CloudNetworkTest, self).setUp()
        resource._register_class("Rackspace::Cloud::Network",
                                 cloudnetworks.CloudNetwork)

    def _parse_stack(self):
        class_name = reflection.get_class_name(self, fully_qualified=False)
        self.stack = utils.parse_stack(self._template, stack_name=class_name)

    def _setup_stack(self, mock_client, *args):
        self.fake_cnw = FakeClient(*args)
        mock_client.return_value = self.fake_cnw
        self._parse_stack()
        self.stack.create()
        self.assertEqual((self.stack.CREATE, self.stack.COMPLETE),
                         self.stack.state)
        res = self.stack['cnw']
        self.assertEqual((res.CREATE, res.COMPLETE), res.state)

    def test_attributes(self, mock_client):
        self._setup_stack(mock_client)
        res = self.stack['cnw']
        template_resource = self._template['resources']['cnw']
        expect_label = template_resource['properties']['label']
        expect_cidr = template_resource['properties']['cidr']
        self.assertEqual(expect_label, res.FnGetAtt('label'))
        self.assertEqual(expect_cidr, res.FnGetAtt('cidr'))

    def test_create_bad_cidr(self, mock_client):
        prop = self._template['resources']['cnw']['properties']
        prop['cidr'] = "bad cidr"
        self._parse_stack()
        exc = self.assertRaises(exception.StackValidationFailed,
                                self.stack.validate)
        self.assertIn("Invalid net cidr", six.text_type(exc))
        # reset property
        prop['cidr'] = "172.16.0.0/24"

    def test_check(self, mock_client):
        self._setup_stack(mock_client)
        res = self.stack['cnw']
        scheduler.TaskRunner(res.check)()
        self.assertEqual((res.CHECK, res.COMPLETE), res.state)

        self.fake_cnw.networks = []
        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(res.check))
        self.assertEqual((res.CHECK, res.FAILED), res.state)
        self.assertIn('No network', str(exc))

    def test_delete(self, mock_client):
        self._setup_stack(mock_client)
        res = self.stack['cnw']
        res_id = res.FnGetRefId()
        scheduler.TaskRunner(res.delete)()
        self.assertEqual((res.DELETE, res.COMPLETE), res.state)
        exc = self.assertRaises(NotFound, self.fake_cnw.get, res_id)
        self.assertIn(res_id, six.text_type(exc))

    def test_delete_no_network_created(self, mock_client):
        self.fake_cnw = FakeClientRaiseException()
        mock_client.return_value = self.fake_cnw
        self._parse_stack()
        self.stack.create()
        self.assertEqual((self.stack.CREATE, self.stack.FAILED),
                         self.stack.state)
        res = self.stack['cnw']
        self.assertEqual((res.CREATE, res.FAILED), res.state)
        scheduler.TaskRunner(res.delete)()
        self.assertEqual((res.DELETE, res.COMPLETE), res.state)

    def test_delete_in_use(self, mock_client):
        self._setup_stack(mock_client)
        res = self.stack['cnw']
        fake_network = res.network()
        fake_network.delete = mock.Mock()
        fake_network.delete.side_effect = [cloudnetworks.NetworkInUse(), True]
        mock_client.return_value = fake_network
        fake_network.get = mock.Mock()
        fake_network.get.side_effect = [cloudnetworks.NotFound()]

        scheduler.TaskRunner(res.delete)()
        self.assertEqual((res.DELETE, res.COMPLETE), res.state)

    def test_delete_not_complete(self, mock_client):
        self._setup_stack(mock_client)
        res = self.stack['cnw']
        mock_client.get = mock.Mock()
        task = res.handle_delete()
        self.assertFalse(res.check_delete_complete(task))

    def test_delete_not_found(self, mock_client):
        self._setup_stack(mock_client)
        self.fake_cnw.networks = []
        res = self.stack['cnw']
        scheduler.TaskRunner(res.delete)()
        self.assertEqual((res.DELETE, res.COMPLETE), res.state)

heat-10.0.0/contrib/rackspace/rackspace/tests/test_cloud_loadbalancer.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License.
#    You may obtain a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy
import json
import uuid

import mock
import mox
import six

from heat.common import exception
from heat.common import template_format
from heat.engine import resource
from heat.engine import rsrc_defn
from heat.engine import scheduler
from heat.tests import common
from heat.tests import utils

from ..resources import cloud_loadbalancer as lb  # noqa

# The following fakes are for pyrax


cert = """\n-----BEGIN CERTIFICATE-----
MIIFBjCCAu4CCQDWdcR5LY/+/jANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMB4XDTE0MTAxNjE3MDYxNVoXDTE1MTAxNjE3MDYxNVowRTELMAkG
A1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0
IFdpZGdpdHMgUHR5IEx0ZDCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB
AMm5NcP0tMKHblT6Ud1k8TxZ9/8uOHwUNPbvFsvSyCupj0J0vGCTjbuC2I5T/CXR
tnLEIt/EarlNAqcjbDCWtSyEKs3zDmmkreoIDEa8pyAQ2ycsCXGMxDN97F3/wlLZ
agUNM0FwGHLZWBg62bM6l+bpTUcX0PqSyv/aVMhJ8EPDX0Dx1RYsVwUzIe/HWC7x
vCmtDApAp1Fwq7AwlRaKU17sGwPWJ8+I8PyouBdqNuslHm7LQ0XvBA5DfkQA6feB
ZeJIyOtctM9WFWQI5fKOsyt5P306B3Zztw9VZLAmZ8qHex+R1WY1zXxDAwKEQz/X
8bRqMA/VU8OxJcK0AmY/1v/TFmAlRh2XBCIc+5UGtCcftWvZJAsKur8Hg5pPluGv
ptyqSgSsSKtOVWkyTANP1LyOkpBA8Kmkeo2CKXu1SCFypY5Q6E+Fy8Y8RaHJPvzR
NHcm1tkBvHOKyRso6FjvxuJEyIC9EyUK010nwQm7Qui11VgCSHBoaKVvkIbFfQdK
aCes0oQO5dqY0+fC/IFDhrxlvSd2Wk7KjuNjNu9kVN9Ama2pRTxhYKaN+GsHfoL7
ra6G9HjbUVULAdjCko3zOKEUzFLLf1VZYk7hDhyv9kovk0b8sr5WowxW7+9Wy0NK
WL5f2QgVCcoHw9bGhyuYQCdBfztNmKOWe9pGj6bQAx4pAgMBAAEwDQYJKoZIhvcN
AQEFBQADggIBALFSj3G2TEL/UWtNcPeY2fbxSGBrboFx3ur8+zTkdZzvfC8H9/UK
w0aRH0rK4+lKYDqF6A9bUHP17DaJm1lF9In38VVMOuur0ehUIn1S2U3OvlDLN68S
p5D4wGKMcUfUQ6pzhSKJCMvGX561TKHCc5fZhPruy75Xq2DcwJENE189foKLFvJs
ca4sIARqP6v1vfARcfH5leSsdIq8hy6VfL0BRATXfNHZh4SNbyDJYYTxrEUPHYXW
pzW6TziZXYNMG2ZRdHF/mDJuFzw2EklOrPC9MySCZv2i9swnqyuwNYh/SAMhodTv
ZDGy4nbjWNe5BflTMBceh45VpyTcnQulFhZQFwP79fK10BoDrOc1mEefhIqT+fPI
LJepLOf7CSXtYBcWbmMCLHNh+PrlCiA1QMTyd/AC1vvoiyCbs3M419XbXcBSDEh8
tACplmhf6z1vDkElWiDr8y0kujJ/Gie24iLTun6oHG+f+o6bbQ9w196T0olLcGx0
oAYL0Olqli6cWHhraVAzZ5t5PH4X9TiESuQ+PMjqGImCIUscXY4objdnB5dfPHoz
eF5whPl36/GK8HUixCibkCyqEOBBuNqhOz7nVLM0eg5L+TE5coizEBagxVCovYSj
fQ9zkIgaC5oeH6L0C1FFG1vRNSWokheBk14ztVoJCJyFr6p0/6pD7SeR
-----END CERTIFICATE-----\n"""

private_key = """\n-----BEGIN PRIVATE KEY-----
MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDJuTXD9LTCh25U
+lHdZPE8Wff/Ljh8FDT27xbL0sgrqY9CdLxgk427gtiOU/wl0bZyxCLfxGq5TQKn
I2wwlrUshCrN8w5ppK3qCAxGvKcgENsnLAlxjMQzfexd/8JS2WoFDTNBcBhy2VgY
OtmzOpfm6U1HF9D6ksr/2lTISfBDw19A8dUWLFcFMyHvx1gu8bwprQwKQKdRcKuw
MJUWilNe7BsD1ifPiPD8qLgXajbrJR5uy0NF7wQOQ35EAOn3gWXiSMjrXLTPVhVk
COXyjrMreT99Ogd2c7cPVWSwJmfKh3sfkdVmNc18QwMChEM/1/G0ajAP1VPDsSXC
tAJmP9b/0xZgJUYdlwQiHPuVBrQnH7Vr2SQLCrq/B4OaT5bhr6bcqkoErEirTlVp
MkwDT9S8jpKQQPCppHqNgil7tUghcqWOUOhPhcvGPEWhyT780TR3JtbZAbxziskb
KOhY78biRMiAvRMlCtNdJ8EJu0LotdVYAkhwaGilb5CGxX0HSmgnrNKEDuXamNPn
wvyBQ4a8Zb0ndlpOyo7jYzbvZFTfQJmtqUU8YWCmjfhrB36C+62uhvR421FVCwHY
wpKN8zihFMxSy39VWWJO4Q4cr/ZKL5NG/LK+VqMMVu/vVstDSli+X9kIFQnKB8PW
xocrmEAnQX87TZijlnvaRo+m0AMeKQIDAQABAoICAA8DuBrDxgiMqAuvLhS6hLIn
SCw4NoAVyPNwTFQTdk65qi4aHkNZ+DyyuoetfKEcAOZ97tKU/hSYxM/H9S+QqB+O
HtmBc9stJLy8qJ1DQXVDi+xYfMN05M2oW8WLWd1szVVe7Ce8vjUeNE5pYvbSL6hC
STw3a5ibAH0WtSTLTBTfH+HnniKuXjPG4InGXqvv1j+L38+LjGilaEIO+6nX1ejE
ziX09LWfzcAglsM3ZqsN8jvw6Sr1ZWniYC2Tm9aOTRUQsdPC7LpZ//GYL/Vj5bYg
qjcZ8KBCcKe1hW8PDL6oYuOwqR+YdZkAK+MuEQtZeWYiWT10dW2la9gYKe2OZuQ1
7q3zZ6zLP+XP+0N7DRMTTuk2gurBVX7VldzIzvjmW8X+8Q5QO+EAqKr2yordK3S1
uYcKmyL4Nd6rSFjRo0zSqHMNOyKt3b1r3m/eR2W623rT5uTjgNYpiwCNxnxmcjpK
Sq7JzZKz9NLbEKQWsP9gQ3G6pp3XfLtoOHEDkSKMmQxd8mzK6Ja/9iC+JGqRTJN+
STe1vL9L2DC7GnjOH1h2TwLoLtQWSGebf/GBxju0e5pAL0UYWBNjAwcpOoRU9J5J
y9E7sNbbXTmK2rg3B/5VKGQckBWfurg7CjAmHGgz9xxceJQLKvT1O5zHZc+v4TVB
XDZjtz8L2k3wFLDynDY5AoIBAQDm2fFgx4vk+gRFXPoLNN34Jw2fT+xuwD/H7K0e
0Cas0NfyNil/Kbp+rhMHuVXTt86BIY+z8GO4wwn+YdDgihBwobAh2G9T/P6wNm+Q
NcIeRioml8V/CP7lOQONQJ6sLTRYnNLfB96uMFe+13DO/PjFybee5VflfBUrJK1M
DqRLwm9wEIf5p0CWYI/ZJaDNN71B09BB/jdT/e7Ro1hXHlq3W4tKqRDPfuUqwy3H
ocYQ1SUk3oFdSiYFd6PijNkfTnrtyToa0xUL9uGL+De1LfgV+uvqkOduQqnpm/5+
XQC1qbTUjq+4WEsuPjYf2E0WAVFGzwzWcdb0LnMIUJHwPvpLAoIBAQDfsvCZlcFM
nGBk1zUnV3+21CPK+5+X3zLHr/4otQHlGMFL6ZiQManvKMX6a/cT3rG+LvECcXGD
jSsTu7JIt9l8VTpbPaS76htTmQYaAZERitBx1C8zDMuI2O4bjFLUGUX73RyTZdRm
G68IX+7Q7SL8zr/fHjcnk+3yj0L1soAVPC7lY3se7vQ/SCre97E+noP5yOhrpnRt
dij7NYy79xcvUZfc/z0//Ia4JSCcIvv2HO7JZIPzUCVO4sjbUOGsgR9pwwQkwYeP
b5P0MVaPgFnOgo/rz6Uqe+LpeY83SUwc2q8W8bskzTLZEnwSV5bxCY+gIn9KCZSG
8QxuftgIiQDbAoIBAQDQ2oTC5kXulzOd/YxK7z2S8OImLAzf9ha+LaZCplcXKqr0
e4P3hC0xxxN4fXjk3vp5YX+9b9MIqYw1FRIA02gkPmQ3erTd65oQmm88rSY+dYRU
/iKz19OkVnycIsZrR0qAkQFGvrv8I8h+5DMvUTdQ2jrCCwQGnsgYDEqs8OI7mGFx
pcMfXu3UHvCFqMFeaPtUvuk/i1tLJgYWrA2UY+X21V+j4GlREKEMmyCj5/xl5jCA
tr2bRSY49BDVOlCFPl+BGfjzo9z6whU0qRDdXgWA/U7LHOYEn1NSAsuwTzwBHtR3
KdBYm6kI4Ufeb7buHasGwPQAX2X17MAt2ZbvIEsZAoIBAQC4g5dzh5PGhmH4K48b
YU/l1TukzUIJekAfd+ozV4I1nuKppAeEQILD0yTh9zX4vMJtdbiz5DDWapWylCpt
UsBgjsgwxDriCSr7HIhs4QfwqUhf67325MHpoc1dCbS0YBhatDpC1kaI5qLMTJzm
1gL69epLtleWHK2zWjnIAbEmUtr3uMOwczciD3vVKAeZ+BQx72bOjKESPNl2w+fO
jvQfwrR5xEqYQco5j95DC5Q6oAjSM0enZV8wn10/kYpjyKnJieMcEkmnpUgrrpqQ
iTUKYqUlw8OftEopfGwGFT5junmbek57/4nGhTmzw22sac9/LZVC034ghClV5uh4
udDrAoIBAQCJHfBPJmJMT/WtSATTceVDgZiyezWNgH2yLJMqDP6sEuImnLAg2L9M
Yc6LqMcHLj7CyXfy2AEAuYTZwXFSRmVKl6Ycad7sS/hIL1ykvDveRU9VNImexDBq
AJR4GKr6jbRZnBztnRYZTsGA+TcrFc6SwdSPXgz7JQT9uw+JkhLi59m141XBdeRc
NQ/LFgOaxjvRUID81izQaYEyADId7asy+2QVazMDafuALJ23WSUMSXajCXaC6/7N
53RWrOAb+kFRgjuHM8pQkpgnY/Ds0MZxpakFw3Y7PAEL99xyYdR+rE3JOMjPlgr0
LpTt0Xs1OFZxaNpolW5Qis4os7UmmIRV
-----END PRIVATE KEY-----\n"""


class FakeException(Exception):
    pass


class FakeClient(object):
    user_agent = "Fake"
    USER_AGENT = "Fake"


class FakeManager(object):
    api = FakeClient()

    def list(self):
        pass

    def get(self, item):
        pass

    def delete(self, item):
        pass

    def create(self, *args, **kwargs):
        pass

    def find(self, *args, **kwargs):
        pass

    def action(self, item, action_type, body=None):
        pass


class FakeLoadBalancerManager(object):
    def __init__(self, api=None, *args, **kwargs):
        pass

    def set_content_caching(self, *args, **kwargs):
        pass


class FakeNode(object):
    def __init__(self, address=None, port=None, condition=None, weight=None,
                 status=None, parent=None, type=None, id=None):
        if not (address and port):
            # This mimics the check that pyrax does on Node instantiation
            raise TypeError("You must include an address and "
                            "a port when creating a node.")
        self.address = address
        self.port = port
        self.condition = condition
        self.weight = weight
        self.status = status
        self.parent = parent
        self.type = type
        self.id = id

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not self.__eq__(other)

    def update(self):
        pass

    def delete(self):
        pass


class FakeVirtualIP(object):
    def __init__(self, address=None, port=None, condition=None,
                 ipVersion=None, type=None, id=None):
        self.address = address
        self.port = port
        self.condition = condition
        self.ipVersion = ipVersion
        self.type = type
        self.id = id
        self.ip_version = ipVersion

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not self.__eq__(other)


class FakeLoadBalancerClient(object):
    def __init__(self, *args, **kwargs):
        self.Node = FakeNode
        self.VirtualIP = FakeVirtualIP

    def get(self, *args, **kwargs):
        pass

    def create(self, *args, **kwargs):
        pass


class FakeLoadBalancer(object):
    def __init__(self, name=None, info=None, *args, **kwargs):
        name = name or uuid.uuid4()
        info = info or {"fake": "fake"}
        self.id = uuid.uuid4()
        self.manager = FakeLoadBalancerManager()
        self.Node = FakeNode
        self.VirtualIP = FakeVirtualIP
        self.nodes = []
        self.algorithm = "ROUND_ROBIN"
        self.session_persistence = "HTTP_COOKIE"
        self.connection_logging = False
        self.timeout = None
        self.httpsRedirect = False
        self.protocol = None
        self.port = None
        self.name = None
        self.halfClosed = None
        self.content_caching = False

    def get(self, *args, **kwargs):
        pass

    def add_nodes(self, *args, **kwargs):
        pass

    def add_ssl_termination(self, *args, **kwargs):
        pass

    def set_error_page(self, *args, **kwargs):
        pass

    def clear_error_page(self, *args, **kwargs):
        pass

    def add_access_list(self, *args, **kwargs):
        pass

    def update(self, *args, **kwargs):
        pass

    def add_health_monitor(self, *args, **kwargs):
        pass

    def delete_health_monitor(self, *args, **kwargs):
        pass

    def delete_ssl_termination(self, *args, **kwargs):
        pass

    def set_metadata(self, *args, **kwargs):
        pass

    def delete_metadata(self, *args, **kwargs):
        pass

    def add_connection_throttle(self, *args, **kwargs):
        pass

    def delete_connection_throttle(self, *args, **kwargs):
        pass

    def delete(self, *args, **kwargs):
        pass

    def get_health_monitor(self, *args, **kwargs):
        return {}

    def get_metadata(self, *args, **kwargs):
        return {}

    def get_error_page(self, *args, **kwargs):
        pass

    def get_connection_throttle(self, *args, **kwargs):
        pass

    def get_ssl_termination(self, *args, **kwargs):
        pass

    def get_access_list(self, *args, **kwargs):
        pass


class LoadBalancerWithFakeClient(lb.CloudLoadBalancer):
    def cloud_lb(self):
        return FakeLoadBalancerClient()


def override_resource():
    return {
        'Rackspace::Cloud::LoadBalancer': LoadBalancerWithFakeClient
    }


class LoadBalancerTest(common.HeatTestCase):

    def setUp(self):
        super(LoadBalancerTest, self).setUp()

        self.lb_props = {
            "name": "test-clb",
            "nodes": [{"addresses": ["166.78.103.141"],
                       "port": 80,
                       "condition": "ENABLED"}],
            "protocol": "HTTP",
            "port": 80,
            "virtualIps": [
                {"type": "PUBLIC", "ipVersion": "IPV6"}],
            "algorithm": 'LEAST_CONNECTIONS',
            "connectionThrottle": {'maxConnectionRate': 1000},
            'timeout': 110,
            'contentCaching': 'DISABLED'
        }
        self.lb_template = {
            "AWSTemplateFormatVersion": "2010-09-09",
            "Description": "fawef",
            "Resources": {
                self._get_lb_resource_name(): {
                    "Type": "Rackspace::Cloud::LoadBalancer",
                    "Properties": self.lb_props,
                }
            }
        }

        self.lb_name = 'test-clb'
        self.expected_body = {
            "nodes": [FakeNode(address=u"166.78.103.141", port=80,
                               condition=u"ENABLED", type=u"PRIMARY",
                               weight=1)],
            "protocol": u'HTTP',
            "port": 80,
            "virtual_ips": [FakeVirtualIP(type=u"PUBLIC", ipVersion=u"IPV6")],
            "algorithm": u'LEAST_CONNECTIONS',
            "connectionThrottle": {'maxConnectionRate': 1000,
                                   'maxConnections': None,
                                   'rateInterval': None,
                                   'minConnections': None},
            "connectionLogging": None,
            "halfClosed": None,
            "healthMonitor": None,
            "metadata": None,
            "sessionPersistence": None,
            "timeout": 110,
            "httpsRedirect": False
        }

        lb.resource_mapping = override_resource
        resource._register_class("Rackspace::Cloud::LoadBalancer",
                                 LoadBalancerWithFakeClient)

    def _get_lb_resource_name(self):
        return "lb-" + str(uuid.uuid4())

    def __getattribute__(self, name):
        if name == 'expected_body' or name == 'lb_template':
            return copy.deepcopy(super(LoadBalancerTest, self)
                                 .__getattribute__(name))
        return super(LoadBalancerTest, self).__getattribute__(name)

    def _mock_create(self, tmpl, stack, resource_name, lb_name, lb_body):
        resource_defns = tmpl.resource_definitions(stack)
        rsrc = LoadBalancerWithFakeClient(resource_name,
                                          resource_defns[resource_name],
                                          stack)

        fake_lb = FakeLoadBalancer(name=lb_name)
        fake_lb.status = 'ACTIVE'
        fake_lb.resource_id = 1234

        self.m.StubOutWithMock(rsrc.clb, 'create')
        rsrc.clb.create(lb_name, **lb_body).AndReturn(fake_lb)

        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).MultipleTimes().AndReturn(
            fake_lb)

        return (rsrc, fake_lb)

    def _get_first_resource_name(self, templ):
        return next(k for k in templ['Resources'])

    def _mock_loadbalancer(self, lb_template, expected_name, expected_body):
        t = template_format.parse(json.dumps(lb_template))
        self.stack = utils.parse_stack(t,
                                       stack_name=utils.random_name())

        rsrc, fake_lb = self._mock_create(self.stack.t, self.stack,
                                          self._get_first_resource_name(
                                              lb_template),
                                          expected_name,
                                          expected_body)
        return (rsrc, fake_lb)

    def _set_template(self, templ, **kwargs):
        for k, v in six.iteritems(kwargs):
            templ['Resources'][self._get_first_resource_name(templ)][
                'Properties'][k] = v
        return templ

    def _set_expected(self, expected, **kwargs):
        for k, v in six.iteritems(kwargs):
            expected[k] = v
        return expected

    def test_process_node(self):
        nodes = [{'addresses': ['1234'], 'port': 80, 'enabled': True},
                 {'addresses': ['4567', '8901', '8903'], 'port': 80,
                  'enabled': True},
                 {'addresses': [], 'port': 80, 'enabled': True}]
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        expected_nodes = [{'address': '1234', 'port': 80, 'enabled': True},
                          {'address': '4567', 'port': 80, 'enabled': True},
                          {'address': '8901', 'port': 80, 'enabled': True},
                          {'address': '8903', 'port': 80, 'enabled': True}]
        self.assertEqual(expected_nodes, list(rsrc._process_nodes(nodes)))

    def test_nodeless(self):
        """It's possible to create a LoadBalancer resource with no nodes."""
        template = self._set_template(self.lb_template, nodes=[])
        expected_body = copy.deepcopy(self.expected_body)
        expected_body['nodes'] = []
        rsrc, fake_lb = self._mock_loadbalancer(
            template, self.lb_name, expected_body)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_alter_properties(self):
        # test alter properties functions
        template = self._set_template(self.lb_template,
                                      sessionPersistence='HTTP_COOKIE',
                                      connectionLogging=True,
                                      metadata={'yolo': 'heeyyy_gurl'})

        expected = self._set_expected(self.expected_body,
                                      sessionPersistence={
                                          'persistenceType': 'HTTP_COOKIE'},
                                      connectionLogging={'enabled': True},
                                      metadata=[
                                          {'key': 'yolo',
                                           'value': 'heeyyy_gurl'}])

        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_validate_vip(self):
        snippet = {
            "nodes": [],
            "protocol": 'HTTP',
            "port": 80,
            "halfClosed": None,
            "algorithm": u'LEAST_CONNECTIONS',
            "virtualIps": [{"id": "1234"}]
        }
        stack = mock.Mock()
        stack.db_resource_get.return_value = None
        stack.has_cache_data.return_value = False
        # happy path
        resdef = rsrc_defn.ResourceDefinition("testvip",
                                              lb.CloudLoadBalancer,
                                              properties=snippet)
        rsrc = lb.CloudLoadBalancer("testvip", resdef, stack)
        self.assertIsNone(rsrc.validate())
        # make sure the vip id prop is exclusive
        snippet["virtualIps"][0]["type"] = "PUBLIC"
        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertIn("Cannot specify type or version", str(exc))
        # make sure you have to specify type and version if no id
        snippet["virtualIps"] = [{}]
        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertIn("Must specify VIP type and version", str(exc))

    def test_validate_half_closed(self):
        # test failure (invalid protocol)
        template = self._set_template(self.lb_template, halfClosed=True)
        expected = self._set_expected(self.expected_body, halfClosed=True)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertIn('The halfClosed property is only available for the TCP'
                      ' or TCP_CLIENT_FIRST protocols', str(exc))

        # test TCP protocol
        template = self._set_template(template, protocol='TCP')
        expected = self._set_expected(expected, protocol='TCP')
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        self.assertIsNone(rsrc.validate())

        # test TCP_CLIENT_FIRST protocol
        template = self._set_template(template, protocol='TCP_CLIENT_FIRST')
        expected = self._set_expected(expected, protocol='TCP_CLIENT_FIRST')
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        self.assertIsNone(rsrc.validate())

    def test_validate_health_monitor(self):
        # test connect success
        health_monitor = {
            'type': 'CONNECT',
            'attemptsBeforeDeactivation': 1,
            'delay': 1,
            'timeout': 1
        }
        template = self._set_template(self.lb_template,
                                      healthMonitor=health_monitor)
        expected = self._set_expected(self.expected_body,
                                      healthMonitor=health_monitor)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        self.assertIsNone(rsrc.validate())

        # test connect failure
        # bodyRegex is only valid for type 'HTTP(S)'
        health_monitor['bodyRegex'] = 'dfawefawe'
        template = self._set_template(template,
                                      healthMonitor=health_monitor)
        expected = self._set_expected(expected,
                                      healthMonitor=health_monitor)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertIn('Unknown Property bodyRegex', str(exc))

        # test http fields
        health_monitor['type'] = 'HTTP'
        health_monitor['bodyRegex'] = 'bodyRegex'
        health_monitor['statusRegex'] = 'statusRegex'
        health_monitor['hostHeader'] = 'hostHeader'
        health_monitor['path'] = 'path'

        template = self._set_template(template,
                                      healthMonitor=health_monitor)
        expected = self._set_expected(expected,
                                      healthMonitor=health_monitor)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        self.assertIsNone(rsrc.validate())

    def test_validate_ssl_termination(self):
        ssl_termination = {
            'privatekey': 'ewfawe',
            'intermediateCertificate': 'fwaefawe',
            'secureTrafficOnly': True
        }

        # test ssl termination enabled without required fields failure
        template = self._set_template(self.lb_template,
                                      sslTermination=ssl_termination)
        expected = self._set_expected(self.expected_body,
                                      sslTermination=ssl_termination)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)

        exc = self.assertRaises(exception.StackValidationFailed,
                                rsrc.validate)
        self.assertIn("Property certificate not assigned", six.text_type(exc))

        ssl_termination['certificate'] = 'dfaewfwef'
        template = self._set_template(template,
                                      sslTermination=ssl_termination)
        expected = self._set_expected(expected,
                                      sslTermination=ssl_termination)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected)
        self.assertIsNone(rsrc.validate())

    def test_ssl_termination_unstripped_certificates(self):
        ssl_termination_template = {
            'securePort': 443,
            'privatekey': 'afwefawe',
            'certificate': ' \nfawefwea\n ',
            'intermediateCertificate': "\n\nintermediate_certificate\n",
            'secureTrafficOnly': False
        }
        ssl_termination_api = copy.deepcopy(ssl_termination_template)

        template = self._set_template(self.lb_template,
                                      sslTermination=ssl_termination_template)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.StubOutWithMock(fake_lb, 'get_ssl_termination')
        fake_lb.get_ssl_termination().AndReturn({})
        fake_lb.get_ssl_termination().AndReturn({
            'securePort': 443,
            'certificate': 'fawefwea',
            'intermediateCertificate': "intermediate_certificate",
            'secureTrafficOnly': False,
            'enabled': True,
        })

        self.m.StubOutWithMock(fake_lb, 'add_ssl_termination')
        fake_lb.add_ssl_termination(**ssl_termination_api)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_ssl_termination_intermediateCertificate_None(self):
        ssl_termination_template = {
            'securePort': 443,
            'privatekey': 'afwefawe',
            'certificate': ' \nfawefwea\n ',
            'intermediateCertificate': None,
            'secureTrafficOnly': False
        }
        template = self._set_template(self.lb_template,
                                      sslTermination=ssl_termination_template)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.StubOutWithMock(fake_lb, 'get_ssl_termination')
        fake_lb.get_ssl_termination().AndReturn({})
        fake_lb.get_ssl_termination().AndReturn({
            'securePort': 443,
            'certificate': 'fawefwea',
            'secureTrafficOnly': False,
            'enabled': True,
        })

        self.m.StubOutWithMock(fake_lb, 'add_ssl_termination')
        add_ssl_termination_args = {
            'securePort': 443,
            'privatekey': 'afwefawe',
            'certificate': ' \nfawefwea\n ',
            'intermediateCertificate': '',
            'secureTrafficOnly': False
        }
        fake_lb.add_ssl_termination(**add_ssl_termination_args)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_post_creation_access_list(self):
        access_list = [{"address": '192.168.1.1/0',
                        'type': 'ALLOW'},
                       {'address': '172.165.3.43',
                        'type': 'DENY'}]
        api_access_list = [{"address": '192.168.1.1/0', 'id': 1234,
                            'type': 'ALLOW'},
                           {'address': '172.165.3.43', 'id': 3422,
                            'type': 'DENY'}]

        template = self._set_template(self.lb_template,
                                      accessList=access_list)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.StubOutWithMock(fake_lb, 'get_access_list')
        fake_lb.get_access_list().AndReturn([])
        fake_lb.get_access_list().AndReturn(api_access_list)

        self.m.StubOutWithMock(fake_lb, 'add_access_list')
        fake_lb.add_access_list(access_list)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_ref_id(self):
        """The Reference ID of the resource is the resource ID."""
        template = self._set_template(self.lb_template)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

        self.assertEqual(rsrc.resource_id, rsrc.FnGetRefId())

    def test_post_creation_error_page(self):
        error_page = "REALLY BIG ERROR"

        template = self._set_template(self.lb_template,
                                      errorPage=error_page)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.StubOutWithMock(fake_lb, 'get_error_page')
        fake_lb.get_error_page().AndReturn({u'errorpage': {u'content': u''}})
        fake_lb.get_error_page().AndReturn(
            {u'errorpage': {u'content': error_page}})

        self.m.StubOutWithMock(fake_lb, 'set_error_page')
        fake_lb.set_error_page(error_page)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_post_creation_ssl_termination(self):
        ssl_termination_template = {
            'securePort': 443,
            'privatekey': 'afwefawe',
            'certificate': 'fawefwea',
            'intermediateCertificate': "intermediate_certificate",
            'secureTrafficOnly': False
        }
        ssl_termination_api = copy.deepcopy(ssl_termination_template)

        template = self._set_template(self.lb_template,
                                      sslTermination=ssl_termination_template)
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.StubOutWithMock(fake_lb, 'get_ssl_termination')
        fake_lb.get_ssl_termination().AndReturn({})
        fake_lb.get_ssl_termination().AndReturn({
            'securePort': 443,
            'certificate': 'fawefwea',
            'intermediateCertificate': "intermediate_certificate",
            'secureTrafficOnly': False,
            'enabled': True,
        })

        self.m.StubOutWithMock(fake_lb, 'add_ssl_termination')
        fake_lb.add_ssl_termination(**ssl_termination_api)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_post_creation_content_caching(self):
        template = self._set_template(self.lb_template,
                                      contentCaching='ENABLED')
        rsrc = self._mock_loadbalancer(template, self.lb_name,
                                       self.expected_body)[0]
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

    def test_check(self):
        stack = mock.Mock()
        stack.db_resource_get.return_value = None
        stack.has_cache_data.return_value = False
        resdef = mock.Mock(spec=rsrc_defn.ResourceDefinition)
        loadbalancer = lb.CloudLoadBalancer("test", resdef, stack)
        loadbalancer._add_event = mock.Mock()
        mock_cloud_lb = mock.Mock()
        mock_get = mock.Mock(return_value=mock_cloud_lb)
        loadbalancer.clb.get = mock_get

        mock_cloud_lb.status = 'ACTIVE'
        scheduler.TaskRunner(loadbalancer.check)()
        self.assertEqual('CHECK', loadbalancer.action)
        self.assertEqual('COMPLETE', loadbalancer.status)

        mock_cloud_lb.status = 'FOOBAR'
        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(loadbalancer.check))
        self.assertEqual('CHECK', loadbalancer.action)
        self.assertEqual('FAILED', loadbalancer.status)
        self.assertIn('FOOBAR', str(exc))

        mock_get.side_effect = lb.NotFound('boom')
        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(loadbalancer.check))
        self.assertEqual('CHECK', loadbalancer.action)
        self.assertEqual('FAILED', loadbalancer.status)
        self.assertIn('boom', str(exc))

    def test_update_add_node_by_address(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        fake_lb.nodes = self.expected_body['nodes']
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        expected_ip = '172.168.1.4'
        props['nodes'] = [
            {"addresses": ["166.78.103.141"],
             "port": 80,
             "condition": "ENABLED",
             "type": "PRIMARY",
             "weight": 1},
            {"addresses": [expected_ip],
             "port": 80,
             "condition": "ENABLED",
             "type": "PRIMARY",
             "weight": 1}]
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb)

        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb1.nodes = [
            FakeNode(address=u"172.168.1.4", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"166.78.103.141", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
        ]
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)

        self.m.StubOutWithMock(fake_lb, 'add_nodes')
        fake_lb.add_nodes([
            fake_lb.Node(address=expected_ip,
                         port=80,
                         condition='ENABLED',
                         type="PRIMARY",
                         weight=1)])

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_resolve_attr_noid(self):
        stack = mock.Mock()
        stack.db_resource_get.return_value = None
        stack.has_cache_data.return_value = False
        resdef = mock.Mock(spec=rsrc_defn.ResourceDefinition)
        lbres = lb.CloudLoadBalancer("test", resdef, stack)
        self.assertIsNone(lbres._resolve_attribute("PublicIp"))

    def test_resolve_attr_virtualips(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        fake_lb.virtual_ips = [FakeVirtualIP(address='1.2.3.4',
                                             type='PUBLIC',
                                             ipVersion="IPv6",
                                             id='test-id')]
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        expected = [{
            'ip_version': 'IPv6',
            'type': 'PUBLIC',
            'id': 'test-id',
            'address': '1.2.3.4'}]
        self.m.ReplayAll()
        self.assertEqual(expected, rsrc._resolve_attribute("virtualIps"))
        self.m.VerifyAll()

    def test_update_nodes_immutable(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        current_nodes = [
            FakeNode(address=u"1.1.1.1", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"2.2.2.2", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"3.3.3.3", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1)
        ]
        fake_lb.nodes = current_nodes
        fake_lb.tracker = "fake_lb"
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        expected_ip = '4.4.4.4'
        props['nodes'] = [
            {"addresses": ["1.1.1.1"], "port": 80, "condition": "ENABLED",
             "type": "PRIMARY", "weight": 1},
            {"addresses": ["2.2.2.2"], "port": 80, "condition": "DISABLED",
             "type": "PRIMARY", "weight": 1},
            {"addresses": [expected_ip], "port": 80, "condition": "ENABLED",
             "type": "PRIMARY", "weight": 1}
        ]
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')

        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb1.status = "PENDING_UPDATE"
        fake_lb1.tracker = "fake_lb1"

        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb)  # ACTIVE

        # Add node `expected_ip`
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)  # PENDING_UPDATE

        fake_lb2 = copy.deepcopy(fake_lb1)
        fake_lb2.status = "ACTIVE"
        fake_lb2.nodes = [
            FakeNode(address=u"1.1.1.1", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"2.2.2.2", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"3.3.3.3", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"4.4.4.4", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
        ]
        fake_lb2.tracker = "fake_lb2"
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)  # ACTIVE

        # Delete node 3.3.3.3
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)  # PENDING_UPDATE

        fake_lb3 = copy.deepcopy(fake_lb2)
        fake_lb3.status = "ACTIVE"
        fake_lb3.nodes = [
            FakeNode(address=u"1.1.1.1", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"2.2.2.2", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"4.4.4.4", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1)
        ]
        fake_lb3.tracker = "fake_lb3"
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb3)  # ACTIVE

        # Update node 2.2.2.2
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)  # PENDING_UPDATE

        fake_lb4 = copy.deepcopy(fake_lb3)
        fake_lb4.status = "ACTIVE"
        fake_lb4.nodes = [
            FakeNode(address=u"1.1.1.1", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"2.2.2.2", port=80, condition=u"DISABLED",
                     type="PRIMARY", weight=1),
            FakeNode(address=u"4.4.4.4", port=80, condition=u"ENABLED",
                     type="PRIMARY", weight=1)
        ]
        fake_lb4.tracker = "fake_lb4"
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb4)  # ACTIVE

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_pending_update_status(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        props['name'] = "updated_name"
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb)

        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb1.name = "updated_name"
        fake_lb1.status = "PENDING_UPDATE"  # lb is immutable
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)

        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb2.name = "updated_name"
        fake_lb2.status = "ACTIVE"
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_immutable_exception(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['name'] = "updated_name" update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) # initial iteration rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) # immutable fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.name = "updated_name" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) # after update self.m.StubOutWithMock(fake_lb, 'update') msg = ("Load Balancer '%s' has a status of 'PENDING_UPDATE' and " "is considered immutable." % rsrc.resource_id) fake_lb.update(name="updated_name").AndRaise(Exception(msg)) fake_lb.update(name="updated_name").AndReturn(None) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_create_immutable_exception(self): access_list = [{"address": '192.168.1.1/0', 'type': 'ALLOW'}, {'address': '172.165.3.43', 'type': 'DENY'}] template = self._set_template(self.lb_template, accessList=access_list) rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, self.expected_body) self.m.StubOutWithMock(fake_lb, 'get_access_list') fake_lb.get_access_list().AndReturn({}) fake_lb.get_access_list().AndReturn({}) fake_lb.get_access_list().AndReturn(access_list) self.m.StubOutWithMock(fake_lb, 'add_access_list') msg = ("Load Balancer '%s' has a status of 'PENDING_UPDATE' and " "is considered immutable." 
% rsrc.resource_id) fake_lb.add_access_list(access_list).AndRaise(Exception(msg)) fake_lb.add_access_list(access_list) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.m.VerifyAll() def test_update_lb_name(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['name'] = "updated_name" update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.name = "updated_name" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(name="updated_name") self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_lb_multiple(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['name'] = "updated_name" props['algorithm'] = "RANDOM" update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.name = "updated_name" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) fake_lb2 = copy.deepcopy(fake_lb) fake_lb2.algorithm = "RANDOM" fake_lb2.name = "updated_name" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(name="updated_name", algorithm="RANDOM") self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_lb_algorithm(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, 
self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['algorithm'] = "RANDOM" update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.algorithm = "ROUND_ROBIN" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb1, 'update') fake_lb1.update(algorithm="RANDOM") fake_lb2 = copy.deepcopy(fake_lb) fake_lb2.algorithm = "RANDOM" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_lb_protocol(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['protocol'] = "IMAPS" update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.protocol = "IMAPS" rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(protocol="IMAPS") self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_lb_redirect(self): template = self._set_template( self.lb_template, protocol="HTTPS") expected = self._set_expected( self.expected_body, protocol="HTTPS") rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(template['Resources'][rsrc.name]['Properties']) props['httpsRedirect'] = True update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') 
rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.httpsRedirect = True rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(httpsRedirect=True) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_lb_redirect_https(self): template = self._set_template( self.lb_template, protocol="HTTPS", httpsRedirect=True) expected = self._set_expected( self.expected_body, protocol="HTTPS", httpsRedirect=True) rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_lb_redirect_HTTP_with_SSL_term(self): ssl_termination_template = { 'privatekey': private_key, 'intermediateCertificate': 'fwaefawe', 'secureTrafficOnly': True, 'securePort': 443, 'certificate': cert } ssl_termination_api = copy.deepcopy(ssl_termination_template) ssl_termination_api['enabled'] = True del ssl_termination_api['privatekey'] template = self._set_template( self.lb_template, sslTermination=ssl_termination_template, protocol="HTTP", httpsRedirect=True) expected = self._set_expected( self.expected_body, protocol="HTTP", httpsRedirect=False) rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'create') rsrc.clb.create(self.lb_name, **expected).AndReturn(fake_lb) self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.httpsRedirect = True rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'get_ssl_termination') 
fake_lb.get_ssl_termination().AndReturn({}) fake_lb.get_ssl_termination().AndReturn(ssl_termination_api) self.m.StubOutWithMock(fake_lb1, 'get_ssl_termination') fake_lb1.get_ssl_termination().AndReturn(ssl_termination_api) fake_lb1.get_ssl_termination().AndReturn(ssl_termination_api) fake_lb1.get_ssl_termination().AndReturn(ssl_termination_api) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) def test_update_lb_half_closed(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['halfClosed'] = True update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.halfClosed = True rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(halfClosed=True) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_lb_port(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['port'] = 1234 update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.port = 1234 rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(port=1234) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_lb_timeout(self): rsrc, fake_lb = 
self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['timeout'] = 120 update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb) fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.timeout = 120 rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) self.m.StubOutWithMock(fake_lb, 'update') fake_lb.update(timeout=120) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_health_monitor_add(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['healthMonitor'] = { 'type': "HTTP", 'delay': 10, 'timeout': 10, 'attemptsBeforeDeactivation': 4, 'path': "/", 'statusRegex': "^[234][0-9][0-9]$", 'bodyRegex': ".* testing .*", 'hostHeader': "example.com"} update_template = rsrc.t.freeze(properties=props) self.m.StubOutWithMock(fake_lb, 'get_health_monitor') fake_lb.get_health_monitor().AndReturn({}) fake_lb.get_health_monitor().AndReturn( {'type': "HTTP", 'delay': 10, 'timeout': 10, 'attemptsBeforeDeactivation': 4, 'path': "/", 'statusRegex': "^[234][0-9][0-9]$", 'bodyRegex': ".* testing .*", 'hostHeader': "example.com"}) self.m.StubOutWithMock(fake_lb, 'add_health_monitor') fake_lb.add_health_monitor( attemptsBeforeDeactivation=4, bodyRegex='.* testing .*', delay=10, hostHeader='example.com', path='/', statusRegex='^[234][0-9][0-9]$', timeout=10, type='HTTP') self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_health_monitor_delete(self): template = copy.deepcopy(self.lb_template) lb_name = 
next(iter(template['Resources'])) hm = {'type': "HTTP", 'delay': 10, 'timeout': 10, 'attemptsBeforeDeactivation': 4, 'path': "/", 'statusRegex': "^[234][0-9][0-9]$", 'bodyRegex': ".* testing .*", 'hostHeader': "example.com"} template['Resources'][lb_name]['Properties']['healthMonitor'] = hm expected_body = copy.deepcopy(self.expected_body) expected_body['healthMonitor'] = hm rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() update_template = rsrc.t.freeze(properties=self.lb_props) self.m.StubOutWithMock(fake_lb, 'get_health_monitor') fake_lb.get_health_monitor().AndReturn( {'type': "HTTP", 'delay': 10, 'timeout': 10, 'attemptsBeforeDeactivation': 4, 'path': "/", 'statusRegex': "^[234][0-9][0-9]$", 'bodyRegex': ".* testing .*", 'hostHeader': "example.com"}) fake_lb.get_health_monitor().AndReturn({}) self.m.StubOutWithMock(fake_lb, 'delete_health_monitor') fake_lb.delete_health_monitor() self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_session_persistence_add(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['sessionPersistence'] = 'SOURCE_IP' update_template = rsrc.t.freeze(properties=props) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('SOURCE_IP', fake_lb.session_persistence) self.m.VerifyAll() def test_update_session_persistence_delete(self): template = copy.deepcopy(self.lb_template) lb_name = next(iter(template['Resources'])) template['Resources'][lb_name]['Properties'][ 'sessionPersistence'] = "SOURCE_IP" expected_body = copy.deepcopy(self.expected_body) expected_body['sessionPersistence'] = {'persistenceType': 
"SOURCE_IP"} rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() update_template = rsrc.t.freeze(properties=self.lb_props) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertEqual('', fake_lb.session_persistence) self.m.VerifyAll() def test_update_ssl_termination_add(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['sslTermination'] = { 'securePort': 443, 'privatekey': private_key, 'certificate': cert, 'secureTrafficOnly': False, 'intermediateCertificate': '' } update_template = rsrc.t.freeze(properties=props) self.m.StubOutWithMock(fake_lb, 'get_ssl_termination') fake_lb.get_ssl_termination().AndReturn({}) fake_lb.get_ssl_termination().AndReturn({ 'securePort': 443, 'certificate': cert, 'secureTrafficOnly': False, 'enabled': True}) self.m.StubOutWithMock(fake_lb, 'add_ssl_termination') fake_lb.add_ssl_termination( securePort=443, privatekey=private_key, certificate=cert, secureTrafficOnly=False, intermediateCertificate='') self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_ssl_termination_delete(self): template = copy.deepcopy(self.lb_template) ssl_termination_template = { 'securePort': 443, 'privatekey': private_key, 'certificate': cert, 'intermediateCertificate': '', 'secureTrafficOnly': False} ssl_termination_api = copy.deepcopy(ssl_termination_template) lb_name = next(iter(template['Resources'])) template['Resources'][lb_name]['Properties']['sslTermination'] = ( ssl_termination_template) # The SSL termination config is done post-creation, so no need # to modify self.expected_body rsrc, fake_lb = self._mock_loadbalancer(template, 
self.lb_name, self.expected_body) self.m.StubOutWithMock(fake_lb, 'get_ssl_termination') fake_lb.get_ssl_termination().AndReturn({}) self.m.StubOutWithMock(fake_lb, 'add_ssl_termination') fake_lb.add_ssl_termination(**ssl_termination_api) fake_lb.get_ssl_termination().AndReturn({ 'securePort': 443, 'certificate': cert, 'secureTrafficOnly': False, 'enabled': True}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.m.UnsetStubs() update_template = rsrc.t.freeze(properties=self.lb_props) self.m.StubOutWithMock(rsrc.clb, 'get') rsrc.clb.get(mox.IgnoreArg()).MultipleTimes().AndReturn( fake_lb) self.m.StubOutWithMock(fake_lb, 'get_ssl_termination') fake_lb.get_ssl_termination().AndReturn({ 'securePort': 443, 'certificate': cert, 'secureTrafficOnly': False}) self.m.StubOutWithMock(fake_lb, 'delete_ssl_termination') fake_lb.delete_ssl_termination() fake_lb.get_ssl_termination().AndReturn({}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_metadata_add(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['metadata'] = {'a': 1, 'b': 2} update_template = rsrc.t.freeze(properties=props) self.m.StubOutWithMock(fake_lb, 'get_metadata') fake_lb.get_metadata().AndReturn({}) fake_lb.get_metadata().AndReturn({'a': 1, 'b': 2}) self.m.StubOutWithMock(fake_lb, 'set_metadata') fake_lb.set_metadata({'a': 1, 'b': 2}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_metadata_delete(self): template = copy.deepcopy(self.lb_template) lb_name = next(iter(template['Resources'])) template['Resources'][lb_name]['Properties']['metadata'] = { 'a': 1, 'b': 2} expected_body = copy.deepcopy(self.expected_body) 
        expected_body['metadata'] = mox.SameElementsAs(
            [{'key': 'a', 'value': 1},
             {'key': 'b', 'value': 2}])
        rsrc, fake_lb = self._mock_loadbalancer(
            template, self.lb_name, expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        update_template = rsrc.t.freeze(properties=self.lb_props)

        self.m.StubOutWithMock(fake_lb, 'get_metadata')
        fake_lb.get_metadata().AndReturn({'a': 1, 'b': 2})
        fake_lb.get_metadata().AndReturn({})

        self.m.StubOutWithMock(fake_lb, 'delete_metadata')
        fake_lb.delete_metadata()

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_errorpage_add(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        error_page = (
            '<html><head><title>Service Unavailable</title></head><body><h2>'
            'Service Unavailable</h2>The service is unavailable</body></html>')

        props = copy.deepcopy(self.lb_props)
        props['errorPage'] = error_page
        update_template = rsrc.t.freeze(properties=props)

        self.m.StubOutWithMock(fake_lb, 'get_error_page')
        fake_lb.get_error_page().AndReturn(
            {'errorpage': {'content': 'foo'}})
        fake_lb.get_error_page().AndReturn(
            {'errorpage': {'content': error_page}})

        self.m.StubOutWithMock(fake_lb, 'set_error_page')
        fake_lb.set_error_page(error_page)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_errorpage_delete(self):
        template = copy.deepcopy(self.lb_template)
        lb_name = next(iter(template['Resources']))
        error_page = (
            '<html><head><title>Service Unavailable</title></head><body><h2>'
            'Service Unavailable</h2>The service is unavailable</body></html>')
        template['Resources'][lb_name]['Properties']['errorPage'] = error_page
        # The error page config is done post-creation, so no need to
        # modify self.expected_body
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.StubOutWithMock(fake_lb, 'get_error_page')
        fake_lb.get_error_page().AndReturn({})
        self.m.StubOutWithMock(fake_lb, 'set_error_page')
        fake_lb.set_error_page(error_page)
        fake_lb.get_error_page().AndReturn({'errorpage':
                                            {'content': error_page}})
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.UnsetStubs()

        update_template = rsrc.t.freeze(properties=self.lb_props)
        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).MultipleTimes().AndReturn(
            fake_lb)
        self.m.StubOutWithMock(fake_lb, 'clear_error_page')
        fake_lb.clear_error_page()
        self.m.StubOutWithMock(fake_lb, 'get_error_page')
        fake_lb.get_error_page().AndReturn(
            {'errorpage': {'content': error_page}})
        fake_lb.get_error_page().AndReturn({'errorpage': {'content': ""}})
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_connection_logging_enable(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        props['connectionLogging'] = True
        update_template = rsrc.t.freeze(properties=props)

        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.assertTrue(fake_lb.connection_logging)
        self.m.VerifyAll()

    def test_update_connection_logging_delete(self):
        template = copy.deepcopy(self.lb_template)
        lb_name = next(iter(template['Resources']))
        template['Resources'][lb_name]['Properties'][
            'connectionLogging'] = True
        expected_body = copy.deepcopy(self.expected_body)
        expected_body['connectionLogging'] =
{'enabled': True} rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.m.UnsetStubs() self.m.StubOutWithMock(rsrc.clb, 'get') fake_lb1 = copy.deepcopy(fake_lb) fake_lb1.connection_logging = True rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1) fake_lb2 = copy.deepcopy(fake_lb) fake_lb2.connection_logging = False rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2) update_template = rsrc.t.freeze(properties=self.lb_props) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertFalse(fake_lb.connection_logging) self.m.VerifyAll() def test_update_connection_logging_disable(self): template = copy.deepcopy(self.lb_template) lb_name = next(iter(template['Resources'])) template['Resources'][lb_name]['Properties'][ 'connectionLogging'] = True expected_body = copy.deepcopy(self.expected_body) expected_body['connectionLogging'] = {'enabled': True} rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['connectionLogging'] = False update_template = rsrc.t.freeze(properties=props) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.assertFalse(fake_lb.connection_logging) self.m.VerifyAll() def test_update_connection_throttle_add(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['connectionThrottle'] = {'maxConnections': 1000} update_template = rsrc.t.freeze(properties=props) self.m.StubOutWithMock(fake_lb, 'add_connection_throttle') self.m.StubOutWithMock(fake_lb, 'get_connection_throttle') fake_lb.get_connection_throttle().AndReturn( {'maxConnectionRate': None, 
'minConnections': None, 'rateInterval': None, 'maxConnections': 100}) fake_lb.add_connection_throttle( maxConnections=1000, maxConnectionRate=None, minConnections=None, rateInterval=None) fake_lb.get_connection_throttle().AndReturn( {'maxConnectionRate': None, 'minConnections': None, 'rateInterval': None, 'maxConnections': 1000}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_connection_throttle_delete(self): template = copy.deepcopy(self.lb_template) lb_name = next(iter(template['Resources'])) template['Resources'][lb_name]['Properties'][ 'connectionThrottle'] = {'maxConnections': 1000} expected_body = copy.deepcopy(self.expected_body) expected_body['connectionThrottle'] = { 'maxConnections': 1000, 'maxConnectionRate': None, 'rateInterval': None, 'minConnections': None} rsrc, fake_lb = self._mock_loadbalancer(template, self.lb_name, expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) del props['connectionThrottle'] update_template = rsrc.t.freeze(properties=props) self.m.StubOutWithMock(fake_lb, 'get_connection_throttle') fake_lb.get_connection_throttle().AndReturn({ 'maxConnections': 1000, 'maxConnectionRate': None, 'rateInterval': None, 'minConnections': None}) self.m.StubOutWithMock(fake_lb, 'delete_connection_throttle') fake_lb.delete_connection_throttle() fake_lb.get_connection_throttle().AndReturn({}) self.m.ReplayAll() scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) self.m.VerifyAll() def test_update_content_caching_enable(self): rsrc, fake_lb = self._mock_loadbalancer(self.lb_template, self.lb_name, self.expected_body) self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = copy.deepcopy(self.lb_props) props['contentCaching'] = 'ENABLED' update_template = rsrc.t.freeze(properties=props) self.m.UnsetStubs() 
        self.m.StubOutWithMock(rsrc.clb, 'get')
        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb1.content_caching = False
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)
        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb2.content_caching = True
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_content_caching_deleted(self):
        template = copy.deepcopy(self.lb_template)
        lb_name = next(iter(template['Resources']))
        template['Resources'][lb_name]['Properties'][
            'contentCaching'] = 'ENABLED'
        # Enabling the content cache is done post-creation, so no need
        # to modify self.expected_body
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        del props['contentCaching']
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb1.content_caching = True
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)
        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb2.content_caching = False
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_content_caching_disable(self):
        template = copy.deepcopy(self.lb_template)
        lb_name = next(iter(template['Resources']))
        template['Resources'][lb_name]['Properties'][
            'contentCaching'] = 'ENABLED'
        # Enabling the content cache is done post-creation, so no need
        # to modify self.expected_body
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        props['contentCaching'] = 'DISABLED'
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb1.content_caching = True
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)
        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb2.content_caching = False
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_delete(self):
        template = self._set_template(self.lb_template,
                                      contentCaching='ENABLED')
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()
        self.m.VerifyAll()

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb)
        rsrc.clb.get(mox.IgnoreArg()).AndRaise(lb.NotFound('foo'))
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_delete_immutable(self):
        template = self._set_template(self.lb_template,
                                      contentCaching='ENABLED')
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb)
        rsrc.clb.get(mox.IgnoreArg()).AndRaise(lb.NotFound('foo'))

        self.m.StubOutWithMock(fake_lb, 'delete')
        fake_lb.delete().AndRaise(Exception('immutable'))
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_delete_non_immutable_exc(self):
        template = self._set_template(self.lb_template,
                                      contentCaching='ENABLED')
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb)

        self.m.StubOutWithMock(fake_lb, 'delete')
        fake_lb.delete().AndRaise(FakeException())
        self.m.ReplayAll()

        exc = self.assertRaises(exception.ResourceFailure,
                                scheduler.TaskRunner(rsrc.delete))
        self.assertIn('FakeException', six.text_type(exc))
        self.m.VerifyAll()

    def test_delete_states(self):
        template = self._set_template(self.lb_template,
                                      contentCaching='ENABLED')
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                self.expected_body)
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        self.m.UnsetStubs()
        fake_lb1 = copy.deepcopy(fake_lb)
        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb3 = copy.deepcopy(fake_lb)
        self.m.StubOutWithMock(rsrc.clb, 'get')

        fake_lb1.status = 'ACTIVE'
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)
        fake_lb2.status = 'PENDING_DELETE'
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)
        fake_lb3.status = 'DELETED'
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb3)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.delete)()
        self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_redir(self):
        mock_stack = mock.Mock()
        mock_stack.db_resource_get.return_value = None
        mock_stack.has_cache_data.return_value = False
        props = {'httpsRedirect': True,
                 'protocol': 'HTTPS',
                 'port': 443,
                 'nodes': [],
                 'virtualIps': [{'id': '1234'}]}
        mock_resdef = rsrc_defn.ResourceDefinition("test_lb",
                                                   LoadBalancerWithFakeClient,
                                                   properties=props)
        mock_lb = lb.CloudLoadBalancer("test", mock_resdef, mock_stack)
        self.assertIsNone(mock_lb.validate())
        props['protocol'] = 'HTTP'
        props['sslTermination'] = {
            'secureTrafficOnly': True,
            'securePort': 443,
            'privatekey': "bobloblaw",
            'certificate': 'mycert'
        }
        mock_resdef = rsrc_defn.ResourceDefinition("test_lb_2",
                                                   LoadBalancerWithFakeClient,
                                                   properties=props)
        mock_lb = lb.CloudLoadBalancer("test_2", mock_resdef, mock_stack)
        self.assertIsNone(mock_lb.validate())

    def test_invalid_redir_proto(self):
        mock_stack = mock.Mock()
        mock_stack.db_resource_get.return_value = None
        mock_stack.has_cache_data.return_value = False
        props = {'httpsRedirect': True,
                 'protocol': 'TCP',
                 'port': 1234,
                 'nodes': [],
                 'virtualIps': [{'id': '1234'}]}
        mock_resdef = rsrc_defn.ResourceDefinition("test_lb",
                                                   LoadBalancerWithFakeClient,
                                                   properties=props)
        mock_lb = lb.CloudLoadBalancer("test", mock_resdef, mock_stack)
        ex = self.assertRaises(exception.StackValidationFailed,
                               mock_lb.validate)
        self.assertIn("HTTPS redirect is only available", six.text_type(ex))

    def test_invalid_redir_ssl(self):
        mock_stack = mock.Mock()
        mock_stack.db_resource_get.return_value = None
        mock_stack.has_cache_data.return_value = False
        props = {'httpsRedirect': True,
                 'protocol': 'HTTP',
                 'port': 1234,
                 'nodes': [],
                 'virtualIps': [{'id': '1234'}]}
        mock_resdef = rsrc_defn.ResourceDefinition("test_lb",
                                                   LoadBalancerWithFakeClient,
                                                   properties=props)
        mock_lb = lb.CloudLoadBalancer("test", mock_resdef, mock_stack)
        ex = self.assertRaises(exception.StackValidationFailed,
                               mock_lb.validate)
        self.assertIn("HTTPS redirect is only available", six.text_type(ex))
        props['sslTermination'] = {
            'secureTrafficOnly': False,
            'securePort': 443,
            'privatekey': "bobloblaw",
            'certificate': 'mycert'
        }
        mock_lb = lb.CloudLoadBalancer("test", mock_resdef, mock_stack)
        ex = self.assertRaises(exception.StackValidationFailed,
                               mock_lb.validate)
        self.assertIn("HTTPS redirect is only available", six.text_type(ex))
        props['sslTermination'] = {
            'secureTrafficOnly': True,
            'securePort': 1234,
            'privatekey': "bobloblaw",
            'certificate': 'mycert'
        }
        mock_lb = lb.CloudLoadBalancer("test", mock_resdef, mock_stack)
        ex = self.assertRaises(exception.StackValidationFailed,
                               mock_lb.validate)
        self.assertIn("HTTPS redirect is only available", six.text_type(ex))

    def test_update_nodes_condition_draining(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        fake_lb.nodes = self.expected_body['nodes']
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        expected_ip = '172.168.1.4'
        props['nodes'] = [
            {"addresses": ["166.78.103.141"],
             "port": 80,
             "condition": "DRAINING",
             "type": "PRIMARY",
             "weight": 1},
            {"addresses": [expected_ip],
             "port": 80,
             "condition": "DRAINING",
             "type": "PRIMARY",
             "weight": 1}]
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        fake_lb1 = copy.deepcopy(fake_lb)
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)

        self.m.StubOutWithMock(fake_lb1, 'add_nodes')
        fake_lb1.add_nodes([
            fake_lb1.Node(address=expected_ip,
                          port=80,
                          condition='DRAINING',
                          type="PRIMARY", weight=1)])

        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb2.nodes = [
            FakeNode(address=u"166.78.103.141", port=80,
                     condition=u"DRAINING", type="PRIMARY", weight=1),
            FakeNode(address=u"172.168.1.4", port=80,
                     condition=u"DRAINING", type="PRIMARY", weight=1),
        ]
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_nodes_add_same_address_different_port(self):
        rsrc, fake_lb = self._mock_loadbalancer(self.lb_template,
                                                self.lb_name,
                                                self.expected_body)
        fake_lb.nodes = self.expected_body['nodes']
        fake_lb.tracker = "fake_lb"
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        props['nodes'] = [
            {"addresses": ["166.78.103.141"],
             "port": 80,
             "condition": "ENABLED",
             "type": "PRIMARY",
             "weight": 1},
            {"addresses": ["166.78.103.141"],
             "port": 81,
             "condition": "ENABLED",
             "type": "PRIMARY",
             "weight": 1}]
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        fake_lb1 = copy.deepcopy(fake_lb)
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb1)

        self.m.StubOutWithMock(fake_lb1, 'add_nodes')
        fake_lb1.add_nodes([
            fake_lb1.Node(address="166.78.103.141",
                          port=81,
                          condition='ENABLED',
                          type="PRIMARY", weight=1)])
        fake_lb1.tracker = "fake_lb1"

        fake_lb2 = copy.deepcopy(fake_lb)
        fake_lb2.nodes = [
            FakeNode(address=u"166.78.103.141", port=80,
                     condition=u"ENABLED", type="PRIMARY", weight=1),
            FakeNode(address=u"166.78.103.141", port=81,
                     condition=u"ENABLED", type="PRIMARY", weight=1),
        ]
        fake_lb2.tracker = "fake_lb2"
        rsrc.clb.get(mox.IgnoreArg()).AndReturn(fake_lb2)
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()

    def test_update_nodes_defaults(self):
        template = copy.deepcopy(self.lb_template)
        lb_name = next(iter(template['Resources']))
        tmpl_node = template['Resources'][lb_name]['Properties']['nodes'][0]
        tmpl_node['type'] = "PRIMARY"
        tmpl_node['condition'] = "ENABLED"
        tmpl_node['weight'] = 1
        expected_body = copy.deepcopy(self.expected_body)
        expected_body['nodes'] = [FakeNode(address=u"166.78.103.141",
                                           port=80,
                                           condition=u"ENABLED",
                                           type="PRIMARY", weight=1)]
        rsrc, fake_lb = self._mock_loadbalancer(template,
                                                self.lb_name,
                                                expected_body)
        fake_lb.nodes = self.expected_body['nodes']
        self.m.ReplayAll()
        scheduler.TaskRunner(rsrc.create)()

        props = copy.deepcopy(self.lb_props)
        props['nodes'] = [{"addresses": ["166.78.103.141"], "port": 80}]
        update_template = rsrc.t.freeze(properties=props)

        self.m.UnsetStubs()
        self.m.StubOutWithMock(rsrc.clb, 'get')
        fake_lb1 = copy.deepcopy(fake_lb)
        rsrc.clb.get(mox.IgnoreArg()).MultipleTimes().AndReturn(fake_lb1)

        self.m.StubOutWithMock(fake_lb1, 'add_nodes')
        self.m.ReplayAll()

        scheduler.TaskRunner(rsrc.update, update_template)()
        self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state)
        self.m.VerifyAll()
heat-10.0.0/contrib/rackspace/rackspace/resources/cloud_loadbalancer.py
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy
import itertools

from oslo_log import log as logging
import six

from heat.common import exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import function
from heat.engine import properties
from heat.engine import resource
from heat.engine import support

try:
    from pyrax.exceptions import NotFound  # noqa
    PYRAX_INSTALLED = True
except ImportError:
    # Setup fake exception for testing without pyrax
    class NotFound(Exception):
        pass

    PYRAX_INSTALLED = False

LOG = logging.getLogger(__name__)


def lb_immutable(exc):
    if 'immutable' in six.text_type(exc):
        return True
    return False


class LoadbalancerBuildError(exception.HeatException):
    msg_fmt = _("There was an error building the loadbalancer:%(lb_name)s.")


class CloudLoadBalancer(resource.Resource):
    """Represents a Rackspace Cloud Loadbalancer."""

    support_status = support.SupportStatus(
        status=support.UNSUPPORTED,
        message=_('This resource is not supported, use at your own risk.'))

    PROPERTIES = (
        NAME, NODES, PROTOCOL, ACCESS_LIST, HALF_CLOSED, ALGORITHM,
        CONNECTION_LOGGING, METADATA, PORT, TIMEOUT,
        CONNECTION_THROTTLE, SESSION_PERSISTENCE, VIRTUAL_IPS,
        CONTENT_CACHING, HEALTH_MONITOR, SSL_TERMINATION, ERROR_PAGE,
        HTTPS_REDIRECT,
    ) = (
        'name', 'nodes', 'protocol', 'accessList', 'halfClosed', 'algorithm',
        'connectionLogging', 'metadata', 'port', 'timeout',
        'connectionThrottle', 'sessionPersistence', 'virtualIps',
        'contentCaching', 'healthMonitor', 'sslTermination', 'errorPage',
        'httpsRedirect',
    )
LB_UPDATE_PROPS = (NAME, ALGORITHM, PROTOCOL, HALF_CLOSED, PORT, TIMEOUT, HTTPS_REDIRECT) _NODE_KEYS = ( NODE_ADDRESSES, NODE_PORT, NODE_CONDITION, NODE_TYPE, NODE_WEIGHT, ) = ( 'addresses', 'port', 'condition', 'type', 'weight', ) _ACCESS_LIST_KEYS = ( ACCESS_LIST_ADDRESS, ACCESS_LIST_TYPE, ) = ( 'address', 'type', ) _CONNECTION_THROTTLE_KEYS = ( CONNECTION_THROTTLE_MAX_CONNECTION_RATE, CONNECTION_THROTTLE_MIN_CONNECTIONS, CONNECTION_THROTTLE_MAX_CONNECTIONS, CONNECTION_THROTTLE_RATE_INTERVAL, ) = ( 'maxConnectionRate', 'minConnections', 'maxConnections', 'rateInterval', ) _VIRTUAL_IP_KEYS = ( VIRTUAL_IP_TYPE, VIRTUAL_IP_IP_VERSION, VIRTUAL_IP_ID ) = ( 'type', 'ipVersion', 'id' ) _HEALTH_MONITOR_KEYS = ( HEALTH_MONITOR_ATTEMPTS_BEFORE_DEACTIVATION, HEALTH_MONITOR_DELAY, HEALTH_MONITOR_TIMEOUT, HEALTH_MONITOR_TYPE, HEALTH_MONITOR_BODY_REGEX, HEALTH_MONITOR_HOST_HEADER, HEALTH_MONITOR_PATH, HEALTH_MONITOR_STATUS_REGEX, ) = ( 'attemptsBeforeDeactivation', 'delay', 'timeout', 'type', 'bodyRegex', 'hostHeader', 'path', 'statusRegex', ) _HEALTH_MONITOR_CONNECT_KEYS = ( HEALTH_MONITOR_ATTEMPTS_BEFORE_DEACTIVATION, HEALTH_MONITOR_DELAY, HEALTH_MONITOR_TIMEOUT, HEALTH_MONITOR_TYPE, ) _SSL_TERMINATION_KEYS = ( SSL_TERMINATION_SECURE_PORT, SSL_TERMINATION_PRIVATEKEY, SSL_TERMINATION_CERTIFICATE, SSL_TERMINATION_INTERMEDIATE_CERTIFICATE, SSL_TERMINATION_SECURE_TRAFFIC_ONLY, ) = ( 'securePort', 'privatekey', 'certificate', 'intermediateCertificate', 'secureTrafficOnly', ) ATTRIBUTES = ( PUBLIC_IP, VIPS ) = ( 'PublicIp', 'virtualIps' ) ALGORITHMS = ["LEAST_CONNECTIONS", "RANDOM", "ROUND_ROBIN", "WEIGHTED_LEAST_CONNECTIONS", "WEIGHTED_ROUND_ROBIN"] _health_monitor_schema = { HEALTH_MONITOR_ATTEMPTS_BEFORE_DEACTIVATION: properties.Schema( properties.Schema.NUMBER, required=True, constraints=[ constraints.Range(1, 10), ] ), HEALTH_MONITOR_DELAY: properties.Schema( properties.Schema.NUMBER, required=True, constraints=[ constraints.Range(1, 3600), ] ), HEALTH_MONITOR_TIMEOUT: 
properties.Schema( properties.Schema.NUMBER, required=True, constraints=[ constraints.Range(1, 300), ] ), HEALTH_MONITOR_TYPE: properties.Schema( properties.Schema.STRING, required=True, constraints=[ constraints.AllowedValues(['CONNECT', 'HTTP', 'HTTPS']), ] ), HEALTH_MONITOR_BODY_REGEX: properties.Schema( properties.Schema.STRING ), HEALTH_MONITOR_HOST_HEADER: properties.Schema( properties.Schema.STRING ), HEALTH_MONITOR_PATH: properties.Schema( properties.Schema.STRING ), HEALTH_MONITOR_STATUS_REGEX: properties.Schema( properties.Schema.STRING ), } properties_schema = { NAME: properties.Schema( properties.Schema.STRING, update_allowed=True ), NODES: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ NODE_ADDRESSES: properties.Schema( properties.Schema.LIST, required=True, description=(_("IP addresses for the load balancer " "node. Must have at least one " "address.")), schema=properties.Schema( properties.Schema.STRING ) ), NODE_PORT: properties.Schema( properties.Schema.INTEGER, required=True ), NODE_CONDITION: properties.Schema( properties.Schema.STRING, default='ENABLED', constraints=[ constraints.AllowedValues(['ENABLED', 'DISABLED', 'DRAINING']), ] ), NODE_TYPE: properties.Schema( properties.Schema.STRING, default='PRIMARY', constraints=[ constraints.AllowedValues(['PRIMARY', 'SECONDARY']), ] ), NODE_WEIGHT: properties.Schema( properties.Schema.NUMBER, default=1, constraints=[ constraints.Range(1, 100), ] ), }, ), required=True, update_allowed=True ), PROTOCOL: properties.Schema( properties.Schema.STRING, required=True, constraints=[ constraints.AllowedValues(['DNS_TCP', 'DNS_UDP', 'FTP', 'HTTP', 'HTTPS', 'IMAPS', 'IMAPv4', 'LDAP', 'LDAPS', 'MYSQL', 'POP3', 'POP3S', 'SMTP', 'TCP', 'TCP_CLIENT_FIRST', 'UDP', 'UDP_STREAM', 'SFTP']), ], update_allowed=True ), ACCESS_LIST: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ ACCESS_LIST_ADDRESS: properties.Schema( 
properties.Schema.STRING, required=True ), ACCESS_LIST_TYPE: properties.Schema( properties.Schema.STRING, required=True, constraints=[ constraints.AllowedValues(['ALLOW', 'DENY']), ] ), }, ) ), HALF_CLOSED: properties.Schema( properties.Schema.BOOLEAN, update_allowed=True ), ALGORITHM: properties.Schema( properties.Schema.STRING, constraints=[ constraints.AllowedValues(ALGORITHMS) ], update_allowed=True ), CONNECTION_LOGGING: properties.Schema( properties.Schema.BOOLEAN, update_allowed=True ), METADATA: properties.Schema( properties.Schema.MAP, update_allowed=True ), PORT: properties.Schema( properties.Schema.INTEGER, required=True, update_allowed=True ), TIMEOUT: properties.Schema( properties.Schema.NUMBER, constraints=[ constraints.Range(1, 120), ], update_allowed=True ), CONNECTION_THROTTLE: properties.Schema( properties.Schema.MAP, schema={ CONNECTION_THROTTLE_MAX_CONNECTION_RATE: properties.Schema( properties.Schema.NUMBER, constraints=[ constraints.Range(0, 100000), ] ), CONNECTION_THROTTLE_MIN_CONNECTIONS: properties.Schema( properties.Schema.INTEGER, constraints=[ constraints.Range(1, 1000), ] ), CONNECTION_THROTTLE_MAX_CONNECTIONS: properties.Schema( properties.Schema.INTEGER, constraints=[ constraints.Range(1, 100000), ] ), CONNECTION_THROTTLE_RATE_INTERVAL: properties.Schema( properties.Schema.NUMBER, constraints=[ constraints.Range(1, 3600), ] ), }, update_allowed=True ), SESSION_PERSISTENCE: properties.Schema( properties.Schema.STRING, constraints=[ constraints.AllowedValues(['HTTP_COOKIE', 'SOURCE_IP']), ], update_allowed=True ), VIRTUAL_IPS: properties.Schema( properties.Schema.LIST, schema=properties.Schema( properties.Schema.MAP, schema={ VIRTUAL_IP_TYPE: properties.Schema( properties.Schema.STRING, "The type of VIP (public or internal). This property" " cannot be specified if 'id' is specified. 
This " "property must be specified if id is not specified.", constraints=[ constraints.AllowedValues(['SERVICENET', 'PUBLIC']), ] ), VIRTUAL_IP_IP_VERSION: properties.Schema( properties.Schema.STRING, "IP version of the VIP. This property cannot be " "specified if 'id' is specified. This property must " "be specified if id is not specified.", constraints=[ constraints.AllowedValues(['IPV6', 'IPV4']), ] ), VIRTUAL_IP_ID: properties.Schema( properties.Schema.NUMBER, "ID of a shared VIP to use instead of creating a " "new one. This property cannot be specified if type" " or version is specified." ) }, ), required=True, constraints=[ constraints.Length(min=1) ] ), CONTENT_CACHING: properties.Schema( properties.Schema.STRING, constraints=[ constraints.AllowedValues(['ENABLED', 'DISABLED']), ], update_allowed=True ), HEALTH_MONITOR: properties.Schema( properties.Schema.MAP, schema=_health_monitor_schema, update_allowed=True ), SSL_TERMINATION: properties.Schema( properties.Schema.MAP, schema={ SSL_TERMINATION_SECURE_PORT: properties.Schema( properties.Schema.INTEGER, default=443 ), SSL_TERMINATION_PRIVATEKEY: properties.Schema( properties.Schema.STRING, required=True ), SSL_TERMINATION_CERTIFICATE: properties.Schema( properties.Schema.STRING, required=True ), # only required if configuring intermediate ssl termination # add to custom validation SSL_TERMINATION_INTERMEDIATE_CERTIFICATE: properties.Schema( properties.Schema.STRING ), # pyrax will default to false SSL_TERMINATION_SECURE_TRAFFIC_ONLY: properties.Schema( properties.Schema.BOOLEAN, default=False ), }, update_allowed=True ), ERROR_PAGE: properties.Schema( properties.Schema.STRING, update_allowed=True ), HTTPS_REDIRECT: properties.Schema( properties.Schema.BOOLEAN, _("Enables or disables HTTP to HTTPS redirection for the load " "balancer. When enabled, any HTTP request returns status code " "301 (Moved Permanently), and the requester is redirected to " "the requested URL via the HTTPS protocol on port 443. 
Only " "available for HTTPS protocol (port=443), or HTTP protocol with " "a properly configured SSL termination (secureTrafficOnly=true, " "securePort=443)."), update_allowed=True, default=False, support_status=support.SupportStatus(version="2015.1") ) } attributes_schema = { PUBLIC_IP: attributes.Schema( _('Public IP address of the specified instance.') ), VIPS: attributes.Schema( _("A list of assigned virtual ip addresses") ) } ACTIVE_STATUS = 'ACTIVE' DELETED_STATUS = 'DELETED' PENDING_DELETE_STATUS = 'PENDING_DELETE' PENDING_UPDATE_STATUS = 'PENDING_UPDATE' def __init__(self, name, json_snippet, stack): super(CloudLoadBalancer, self).__init__(name, json_snippet, stack) self.clb = self.cloud_lb() def cloud_lb(self): return self.client('cloud_lb') def _setup_properties(self, properties, function): """Use defined schema properties as kwargs for loadbalancer objects.""" if properties and function: return [function(**self._remove_none(item_dict)) for item_dict in properties] elif function: return [function()] def _alter_properties_for_api(self): """Set up required, but useless, key/value pairs. The following properties have useless key/value pairs which must be passed into the api. Set them up to make template definition easier. 
""" session_persistence = None if self.SESSION_PERSISTENCE in self.properties.data: session_persistence = {'persistenceType': self.properties[self.SESSION_PERSISTENCE]} connection_logging = None if self.CONNECTION_LOGGING in self.properties.data: connection_logging = {"enabled": self.properties[self.CONNECTION_LOGGING]} metadata = None if self.METADATA in self.properties.data: metadata = [{'key': k, 'value': v} for k, v in six.iteritems(self.properties[self.METADATA])] return (session_persistence, connection_logging, metadata) def _check_active(self, lb=None): """Update the loadbalancer state, check the status.""" if not lb: lb = self.clb.get(self.resource_id) if lb.status == self.ACTIVE_STATUS: return True else: return False def _valid_HTTPS_redirect_with_HTTP_prot(self): """Determine if HTTPS redirect is valid when protocol is HTTP""" proto = self.properties[self.PROTOCOL] redir = self.properties[self.HTTPS_REDIRECT] termcfg = self.properties.get(self.SSL_TERMINATION) or {} seconly = termcfg.get(self.SSL_TERMINATION_SECURE_TRAFFIC_ONLY, False) secport = termcfg.get(self.SSL_TERMINATION_SECURE_PORT, 0) if (redir and (proto == "HTTP") and seconly and (secport == 443)): return True return False def _process_node(self, node): for addr in node.get(self.NODE_ADDRESSES, []): norm_node = copy.deepcopy(node) norm_node['address'] = addr del norm_node[self.NODE_ADDRESSES] yield norm_node def _process_nodes(self, node_list): node_itr = six.moves.map(self._process_node, node_list) return itertools.chain.from_iterable(node_itr) def _validate_https_redirect(self): redir = self.properties[self.HTTPS_REDIRECT] proto = self.properties[self.PROTOCOL] if (redir and (proto != "HTTPS") and not self._valid_HTTPS_redirect_with_HTTP_prot()): message = _("HTTPS redirect is only available for the HTTPS " "protocol (port=443), or the HTTP protocol with " "a properly configured SSL termination " "(secureTrafficOnly=true, securePort=443).") raise 
exception.StackValidationFailed(message=message) def handle_create(self): node_list = self._process_nodes(self.properties.get(self.NODES)) nodes = [self.clb.Node(**node) for node in node_list] vips = self.properties.get(self.VIRTUAL_IPS) virtual_ips = self._setup_properties(vips, self.clb.VirtualIP) (session_persistence, connection_logging, metadata ) = self._alter_properties_for_api() lb_body = { 'port': self.properties[self.PORT], 'protocol': self.properties[self.PROTOCOL], 'nodes': nodes, 'virtual_ips': virtual_ips, 'algorithm': self.properties.get(self.ALGORITHM), 'halfClosed': self.properties.get(self.HALF_CLOSED), 'connectionThrottle': self.properties.get( self.CONNECTION_THROTTLE), 'metadata': metadata, 'healthMonitor': self.properties.get(self.HEALTH_MONITOR), 'sessionPersistence': session_persistence, 'timeout': self.properties.get(self.TIMEOUT), 'connectionLogging': connection_logging, self.HTTPS_REDIRECT: self.properties[self.HTTPS_REDIRECT] } if self._valid_HTTPS_redirect_with_HTTP_prot(): lb_body[self.HTTPS_REDIRECT] = False self._validate_https_redirect() lb_name = (self.properties.get(self.NAME) or self.physical_resource_name()) LOG.debug("Creating loadbalancer: %s" % {lb_name: lb_body}) lb = self.clb.create(lb_name, **lb_body) self.resource_id_set(str(lb.id)) def check_create_complete(self, *args): lb = self.clb.get(self.resource_id) return (self._check_active(lb) and self._create_access_list(lb) and self._create_errorpage(lb) and self._create_ssl_term(lb) and self._create_redirect(lb) and self._create_cc(lb)) def _create_access_list(self, lb): if not self.properties[self.ACCESS_LIST]: return True old_access_list = lb.get_access_list() new_access_list = self.properties[self.ACCESS_LIST] if not self._access_list_needs_update(old_access_list, new_access_list): return True try: lb.add_access_list(new_access_list) except Exception as exc: if lb_immutable(exc): return False raise return False def _create_errorpage(self, lb): if not 
self.properties[self.ERROR_PAGE]: return True old_errorpage = lb.get_error_page() new_errorpage_content = self.properties[self.ERROR_PAGE] new_errorpage = {'errorpage': {'content': new_errorpage_content}} if not self._errorpage_needs_update(old_errorpage, new_errorpage): return True try: lb.set_error_page(new_errorpage_content) except Exception as exc: if lb_immutable(exc): return False raise return False def _create_ssl_term(self, lb): if not self.properties[self.SSL_TERMINATION]: return True old_ssl_term = lb.get_ssl_termination() new_ssl_term = self.properties[self.SSL_TERMINATION] if not self._ssl_term_needs_update(old_ssl_term, new_ssl_term): return True try: lb.add_ssl_termination(**new_ssl_term) except Exception as exc: if lb_immutable(exc): return False raise return False def _create_redirect(self, lb): if not self._valid_HTTPS_redirect_with_HTTP_prot(): return True old_redirect = lb.httpsRedirect new_redirect = self.properties[self.HTTPS_REDIRECT] if not self._redirect_needs_update(old_redirect, new_redirect): return True try: lb.update(httpsRedirect=True) except Exception as exc: if lb_immutable(exc): return False raise return False def _create_cc(self, lb): if not self.properties[self.CONTENT_CACHING]: return True old_cc = lb.content_caching new_cc = self.properties[self.CONTENT_CACHING] == 'ENABLED' if not self._cc_needs_update(old_cc, new_cc): return True try: lb.content_caching = new_cc except Exception as exc: if lb_immutable(exc): return False raise return False def handle_check(self): lb = self.clb.get(self.resource_id) if not self._check_active(): raise exception.Error(_("Cloud Loadbalancer is not ACTIVE " "(was: %s)") % lb.status) def handle_update(self, json_snippet, tmpl_diff, prop_diff): return prop_diff def check_update_complete(self, prop_diff): lb = self.clb.get(self.resource_id) return (lb.status != self.PENDING_UPDATE_STATUS and # lb immutable? 
self._update_props(lb, prop_diff) and self._update_nodes_add(lb, prop_diff) and self._update_nodes_delete(lb, prop_diff) and self._update_nodes_change(lb, prop_diff) and self._update_health_monitor(lb, prop_diff) and self._update_session_persistence(lb, prop_diff) and self._update_ssl_termination(lb, prop_diff) and self._update_metadata(lb, prop_diff) and self._update_errorpage(lb, prop_diff) and self._update_connection_logging(lb, prop_diff) and self._update_connection_throttle(lb, prop_diff) and self._update_content_caching(lb, prop_diff)) def _nodes_need_update_add(self, old, new): if not old: return True new = list(self._process_nodes(new)) new_nodes = ["%s%s" % (x['address'], x['port']) for x in new] old_nodes = ["%s%s" % (x.address, x.port) for x in old] for node in new_nodes: if node not in old_nodes: return True return False def _nodes_need_update_delete(self, old, new): if not new: return True new = list(self._process_nodes(new)) new_nodes = ["%s%s" % (x['address'], x['port']) for x in new] old_nodes = ["%s%s" % (x.address, x.port) for x in old] for node in old_nodes: if node not in new_nodes: return True return False def _nodes_need_update_change(self, old, new): def find_node(nodes, address, port): for node in nodes: if node['address'] == address and node['port'] == port: return node new = list(self._process_nodes(new)) for old_node in old: new_node = find_node(new, old_node.address, old_node.port) if (new_node['condition'] != old_node.condition or new_node['type'] != old_node.type or new_node['weight'] != old_node.weight): return True return False def _needs_update_comparison(self, old, new): if old != new: return True return False def _needs_update_comparison_bool(self, old, new): if new is None: return old return self._needs_update_comparison(old, new) def _needs_update_comparison_nullable(self, old, new): if not old and not new: return False return self._needs_update_comparison(old, new) def _props_need_update(self, old, new): return 
self._needs_update_comparison_nullable(old, new) # dict def _hm_needs_update(self, old, new): return self._needs_update_comparison_nullable(old, new) # dict def _sp_needs_update(self, old, new): return self._needs_update_comparison_bool(old, new) # bool def _metadata_needs_update(self, old, new): return self._needs_update_comparison_nullable(old, new) # dict def _errorpage_needs_update(self, old, new): return self._needs_update_comparison_nullable(old, new) # str def _cl_needs_update(self, old, new): return self._needs_update_comparison_bool(old, new) # bool def _ct_needs_update(self, old, new): return self._needs_update_comparison_nullable(old, new) # dict def _cc_needs_update(self, old, new): return self._needs_update_comparison_bool(old, new) # bool def _ssl_term_needs_update(self, old, new): if new is None: return self._needs_update_comparison_nullable( old, new) # dict # check all relevant keys if (old.get(self.SSL_TERMINATION_SECURE_PORT) != new[self.SSL_TERMINATION_SECURE_PORT]): return True if (old.get(self.SSL_TERMINATION_SECURE_TRAFFIC_ONLY) != new[self.SSL_TERMINATION_SECURE_TRAFFIC_ONLY]): return True if (old.get(self.SSL_TERMINATION_CERTIFICATE, '').strip() != new.get(self.SSL_TERMINATION_CERTIFICATE, '').strip()): return True if (new.get(self.SSL_TERMINATION_INTERMEDIATE_CERTIFICATE, '') and (old.get(self.SSL_TERMINATION_INTERMEDIATE_CERTIFICATE, '').strip() != new.get(self.SSL_TERMINATION_INTERMEDIATE_CERTIFICATE, '').strip())): return True return False def _access_list_needs_update(self, old, new): old = [{key: al[key] for key in self._ACCESS_LIST_KEYS} for al in old] old = set([frozenset(s.items()) for s in old]) new = set([frozenset(s.items()) for s in new]) return old != new def _redirect_needs_update(self, old, new): return self._needs_update_comparison_bool(old, new) # bool def _update_props(self, lb, prop_diff): old_props = {} new_props = {} for prop in prop_diff: if prop in self.LB_UPDATE_PROPS: old_props[prop] = getattr(lb, prop) 
new_props[prop] = prop_diff[prop] if new_props and self._props_need_update(old_props, new_props): try: lb.update(**new_props) except Exception as exc: if lb_immutable(exc): return False raise return False return True def _nodes_update_data(self, lb, prop_diff): current_nodes = lb.nodes diff_nodes = self._process_nodes(prop_diff[self.NODES]) # Loadbalancers can be uniquely identified by address and # port. Old is a dict of all nodes the loadbalancer # currently knows about. old = dict(("{0.address}{0.port}".format(node), node) for node in current_nodes) # New is a dict of the nodes the loadbalancer will know # about after this update. new = dict(("%s%s" % (node["address"], node[self.NODE_PORT]), node) for node in diff_nodes) old_set = set(old) new_set = set(new) deleted = old_set.difference(new_set) added = new_set.difference(old_set) updated = new_set.intersection(old_set) return old, new, deleted, added, updated def _update_nodes_add(self, lb, prop_diff): """Add loadbalancers in the new map that are not in the old map.""" if self.NODES not in prop_diff: return True old_nodes = lb.nodes if hasattr(lb, self.NODES) else None new_nodes = prop_diff[self.NODES] if not self._nodes_need_update_add(old_nodes, new_nodes): return True old, new, deleted, added, updated = self._nodes_update_data(lb, prop_diff) new_nodes = [self.clb.Node(**new[lb_node]) for lb_node in added] if new_nodes: try: lb.add_nodes(new_nodes) except Exception as exc: if lb_immutable(exc): return False raise return False def _update_nodes_delete(self, lb, prop_diff): """Delete loadbalancers in the old dict that aren't in the new dict.""" if self.NODES not in prop_diff: return True old_nodes = lb.nodes if hasattr(lb, self.NODES) else None new_nodes = prop_diff[self.NODES] if not self._nodes_need_update_delete(old_nodes, new_nodes): return True old, new, deleted, added, updated = self._nodes_update_data(lb, prop_diff) for node in deleted: try: old[node].delete() except Exception as exc: if 
lb_immutable(exc):
                return False
            raise
        return False

    def _update_nodes_change(self, lb, prop_diff):
        """Update nodes that have been changed."""
        if self.NODES not in prop_diff:
            return True

        old_nodes = lb.nodes if hasattr(lb, self.NODES) else None
        new_nodes = prop_diff[self.NODES]

        if not self._nodes_need_update_change(old_nodes, new_nodes):
            return True

        old, new, deleted, added, updated = self._nodes_update_data(lb,
                                                                    prop_diff)
        for node in updated:
            node_changed = False
            for attribute, new_value in new[node].items():
                if new_value and new_value != getattr(old[node], attribute):
                    node_changed = True
                    setattr(old[node], attribute, new_value)
            if node_changed:
                try:
                    old[node].update()
                except Exception as exc:
                    if lb_immutable(exc):
                        return False
                    raise
        return False

    def _update_health_monitor(self, lb, prop_diff):
        if self.HEALTH_MONITOR not in prop_diff:
            return True

        old_hm = lb.get_health_monitor()
        new_hm = prop_diff[self.HEALTH_MONITOR]

        if not self._hm_needs_update(old_hm, new_hm):
            return True

        try:
            if new_hm is None:
                lb.delete_health_monitor()
            else:
                # Adding a health monitor is destructive, so there's
                # no need to delete, then add
                lb.add_health_monitor(**new_hm)
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_session_persistence(self, lb, prop_diff):
        if self.SESSION_PERSISTENCE not in prop_diff:
            return True

        old_sp = lb.session_persistence
        new_sp = prop_diff[self.SESSION_PERSISTENCE]

        if not self._sp_needs_update(old_sp, new_sp):
            return True

        try:
            if new_sp is None:
                lb.session_persistence = ''
            else:
                # Adding session persistence is destructive
                lb.session_persistence = new_sp
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_ssl_termination(self, lb, prop_diff):
        if self.SSL_TERMINATION not in prop_diff:
            return True

        old_ssl_term = lb.get_ssl_termination()
        new_ssl_term = prop_diff[self.SSL_TERMINATION]

        if not self._ssl_term_needs_update(old_ssl_term, new_ssl_term):
            return True

        try:
            if new_ssl_term is None:
                lb.delete_ssl_termination()
            else:
                # Adding SSL termination is destructive
                lb.add_ssl_termination(**new_ssl_term)
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_metadata(self, lb, prop_diff):
        if self.METADATA not in prop_diff:
            return True

        old_metadata = lb.get_metadata()
        new_metadata = prop_diff[self.METADATA]

        if not self._metadata_needs_update(old_metadata, new_metadata):
            return True

        try:
            if new_metadata is None:
                lb.delete_metadata()
            else:
                lb.set_metadata(new_metadata)
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_errorpage(self, lb, prop_diff):
        if self.ERROR_PAGE not in prop_diff:
            return True

        old_errorpage = lb.get_error_page()['errorpage']['content']
        new_errorpage = prop_diff[self.ERROR_PAGE]

        if not self._errorpage_needs_update(old_errorpage, new_errorpage):
            return True

        try:
            if new_errorpage is None:
                lb.clear_error_page()
            else:
                lb.set_error_page(new_errorpage)
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_connection_logging(self, lb, prop_diff):
        if self.CONNECTION_LOGGING not in prop_diff:
            return True

        old_cl = lb.connection_logging
        new_cl = prop_diff[self.CONNECTION_LOGGING]

        if not self._cl_needs_update(old_cl, new_cl):
            return True

        try:
            if new_cl:
                lb.connection_logging = True
            else:
                lb.connection_logging = False
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_connection_throttle(self, lb, prop_diff):
        if self.CONNECTION_THROTTLE not in prop_diff:
            return True

        old_ct = lb.get_connection_throttle()
        new_ct = prop_diff[self.CONNECTION_THROTTLE]

        if not self._ct_needs_update(old_ct, new_ct):
            return True

        try:
            if new_ct is None:
                lb.delete_connection_throttle()
            else:
                lb.add_connection_throttle(**new_ct)
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def _update_content_caching(self, lb, prop_diff):
        if self.CONTENT_CACHING not in prop_diff:
            return True

        old_cc = lb.content_caching
        new_cc = prop_diff[self.CONTENT_CACHING] == 'ENABLED'

        if not self._cc_needs_update(old_cc, new_cc):
            return True

        try:
            lb.content_caching = new_cc
        except Exception as exc:
            if lb_immutable(exc):
                return False
            raise

        return False

    def check_delete_complete(self, *args):
        if self.resource_id is None:
            return True

        try:
            loadbalancer = self.clb.get(self.resource_id)
        except NotFound:
            return True

        if loadbalancer.status == self.DELETED_STATUS:
            return True
        elif loadbalancer.status == self.PENDING_DELETE_STATUS:
            return False
        else:
            try:
                loadbalancer.delete()
            except Exception as exc:
                if lb_immutable(exc):
                    return False
                raise

        return False

    def _remove_none(self, property_dict):
        """Remove None values that would cause schema validation problems.

        These are values that may be initialized to None.
        """
        return dict((key, value)
                    for (key, value) in six.iteritems(property_dict)
                    if value is not None)

    def validate(self):
        """Validate any of the provided params."""
        res = super(CloudLoadBalancer, self).validate()
        if res:
            return res

        if self.properties.get(self.HALF_CLOSED):
            if not (self.properties[self.PROTOCOL] == 'TCP' or
                    self.properties[self.PROTOCOL] == 'TCP_CLIENT_FIRST'):
                message = (_('The %s property is only available for the TCP '
                             'or TCP_CLIENT_FIRST protocols')
                           % self.HALF_CLOSED)
                raise exception.StackValidationFailed(message=message)

        # health_monitor connect and http types require completely different
        # schema
        if self.properties.get(self.HEALTH_MONITOR):
            prop_val = self.properties[self.HEALTH_MONITOR]
            health_monitor = self._remove_none(prop_val)

            schema = self._health_monitor_schema
            if health_monitor[self.HEALTH_MONITOR_TYPE] == 'CONNECT':
                schema = dict((k, v) for k, v in schema.items()
                              if k in self._HEALTH_MONITOR_CONNECT_KEYS)
            properties.Properties(schema,
                                  health_monitor,
                                  function.resolve,
                                  self.name).validate()

        # validate if HTTPS_REDIRECT is true
        self._validate_https_redirect()

        # if a vip specifies an id, it can't specify version or type;
        # otherwise version and type are required
        for vip in self.properties[self.VIRTUAL_IPS]:
            has_id = vip.get(self.VIRTUAL_IP_ID) is not None
            has_version = vip.get(self.VIRTUAL_IP_IP_VERSION) is not None
            has_type = vip.get(self.VIRTUAL_IP_TYPE) is not None
            if has_id:
                if (has_version or has_type):
                    message = _("Cannot specify type or version if VIP id is"
                                " specified.")
                    raise exception.StackValidationFailed(message=message)
            elif not (has_version and has_type):
                message = _("Must specify VIP type and version if no id "
                            "specified.")
                raise exception.StackValidationFailed(message=message)

    def _public_ip(self, lb):
        for ip in lb.virtual_ips:
            if ip.type == 'PUBLIC':
                return six.text_type(ip.address)

    def _resolve_attribute(self, key):
        if self.resource_id:
            lb = self.clb.get(self.resource_id)
            attribute_function = {
                self.PUBLIC_IP: self._public_ip(lb),
                self.VIPS: [{"id": vip.id, "type": vip.type,
                             "ip_version": vip.ip_version,
                             "address": vip.address}
                            for vip in lb.virtual_ips]
            }
            if key not in attribute_function:
                raise exception.InvalidTemplateAttribute(resource=self.name,
                                                         key=key)
            function = attribute_function[key]
            LOG.info('%(name)s.GetAtt(%(key)s) == %(function)s',
                     {'name': self.name, 'key': key, 'function': function})
            return function


def resource_mapping():
    return {'Rackspace::Cloud::LoadBalancer': CloudLoadBalancer}


def available_resource_mapping():
    if PYRAX_INSTALLED:
        return resource_mapping()
    return {}


# File: heat-10.0.0/contrib/rackspace/rackspace/resources/auto_scale.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the
# License for the specific language governing permissions and limitations
# under the License.

"""Resources for Rackspace Auto Scale."""

import copy

import six

from heat.common import exception
from heat.common.i18n import _
from heat.common import template_format
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support
from heat.engine import template as templatem

try:
    from pyrax.exceptions import Forbidden
    from pyrax.exceptions import NotFound
    PYRAX_INSTALLED = True
except ImportError:
    class Forbidden(Exception):
        """Dummy pyrax exception - only used for testing."""

    class NotFound(Exception):
        """Dummy pyrax exception - only used for testing."""

    PYRAX_INSTALLED = False


class Group(resource.Resource):
    """Represents a scaling group."""

    # pyrax differs drastically from the actual Auto Scale API. We'll prefer
    # the true API here, but since pyrax doesn't support the full flexibility
    # of the API, we'll have to restrict what users can provide.

    support_status = support.SupportStatus(
        status=support.UNSUPPORTED,
        message=_('This resource is not supported, use at your own risk.'))

    # properties are identical to the API POST /groups.
PROPERTIES = ( GROUP_CONFIGURATION, LAUNCH_CONFIGURATION, ) = ( 'groupConfiguration', 'launchConfiguration', ) _GROUP_CONFIGURATION_KEYS = ( GROUP_CONFIGURATION_MAX_ENTITIES, GROUP_CONFIGURATION_COOLDOWN, GROUP_CONFIGURATION_NAME, GROUP_CONFIGURATION_MIN_ENTITIES, GROUP_CONFIGURATION_METADATA, ) = ( 'maxEntities', 'cooldown', 'name', 'minEntities', 'metadata', ) _LAUNCH_CONFIG_KEYS = ( LAUNCH_CONFIG_ARGS, LAUNCH_CONFIG_TYPE, ) = ( 'args', 'type', ) _LAUNCH_CONFIG_ARGS_KEYS = ( LAUNCH_CONFIG_ARGS_LOAD_BALANCERS, LAUNCH_CONFIG_ARGS_SERVER, LAUNCH_CONFIG_ARGS_STACK, ) = ( 'loadBalancers', 'server', 'stack', ) _LAUNCH_CONFIG_ARGS_LOAD_BALANCER_KEYS = ( LAUNCH_CONFIG_ARGS_LOAD_BALANCER_ID, LAUNCH_CONFIG_ARGS_LOAD_BALANCER_PORT, ) = ( 'loadBalancerId', 'port', ) _LAUNCH_CONFIG_ARGS_SERVER_KEYS = ( LAUNCH_CONFIG_ARGS_SERVER_NAME, LAUNCH_CONFIG_ARGS_SERVER_FLAVOR_REF, LAUNCH_CONFIG_ARGS_SERVER_IMAGE_REF, LAUNCH_CONFIG_ARGS_SERVER_METADATA, LAUNCH_CONFIG_ARGS_SERVER_PERSONALITY, LAUNCH_CONFIG_ARGS_SERVER_NETWORKS, LAUNCH_CONFIG_ARGS_SERVER_DISK_CONFIG, LAUNCH_CONFIG_ARGS_SERVER_KEY_NAME, LAUNCH_CONFIG_ARGS_SERVER_USER_DATA, LAUNCH_CONFIG_ARGS_SERVER_CDRIVE ) = ( 'name', 'flavorRef', 'imageRef', 'metadata', 'personality', 'networks', 'diskConfig', # technically maps to OS-DCF:diskConfig 'key_name', 'user_data', 'config_drive' ) _LAUNCH_CONFIG_ARGS_SERVER_NETWORK_KEYS = ( LAUNCH_CONFIG_ARGS_SERVER_NETWORK_UUID, ) = ( 'uuid', ) _LAUNCH_CONFIG_ARGS_STACK_KEYS = ( LAUNCH_CONFIG_ARGS_STACK_TEMPLATE, LAUNCH_CONFIG_ARGS_STACK_TEMPLATE_URL, LAUNCH_CONFIG_ARGS_STACK_DISABLE_ROLLBACK, LAUNCH_CONFIG_ARGS_STACK_ENVIRONMENT, LAUNCH_CONFIG_ARGS_STACK_FILES, LAUNCH_CONFIG_ARGS_STACK_PARAMETERS, LAUNCH_CONFIG_ARGS_STACK_TIMEOUT_MINS ) = ( 'template', 'template_url', 'disable_rollback', 'environment', 'files', 'parameters', 'timeout_mins' ) _launch_configuration_args_schema = { LAUNCH_CONFIG_ARGS_LOAD_BALANCERS: properties.Schema( properties.Schema.LIST, _('List of load balancers to hook the 
' 'server up to. If not specified, no ' 'load balancing will be configured.'), default=[], schema=properties.Schema( properties.Schema.MAP, schema={ LAUNCH_CONFIG_ARGS_LOAD_BALANCER_ID: properties.Schema( properties.Schema.STRING, _('ID of the load balancer.'), required=True ), LAUNCH_CONFIG_ARGS_LOAD_BALANCER_PORT: properties.Schema( properties.Schema.INTEGER, _('Server port to connect the load balancer to.') ), }, ) ), LAUNCH_CONFIG_ARGS_SERVER: properties.Schema( properties.Schema.MAP, _('Server creation arguments, as accepted by the Cloud Servers ' 'server creation API.'), required=False, schema={ LAUNCH_CONFIG_ARGS_SERVER_NAME: properties.Schema( properties.Schema.STRING, _('Server name.'), required=True ), LAUNCH_CONFIG_ARGS_SERVER_FLAVOR_REF: properties.Schema( properties.Schema.STRING, _('The ID or name of the flavor to boot onto.'), constraints=[ constraints.CustomConstraint('nova.flavor') ], required=True ), LAUNCH_CONFIG_ARGS_SERVER_IMAGE_REF: properties.Schema( properties.Schema.STRING, _('The ID or name of the image to boot with.'), constraints=[ constraints.CustomConstraint('glance.image') ], required=True ), LAUNCH_CONFIG_ARGS_SERVER_METADATA: properties.Schema( properties.Schema.MAP, _('Metadata key and value pairs.') ), LAUNCH_CONFIG_ARGS_SERVER_PERSONALITY: properties.Schema( properties.Schema.MAP, _('File path and contents.') ), LAUNCH_CONFIG_ARGS_SERVER_CDRIVE: properties.Schema( properties.Schema.BOOLEAN, _('Enable config drive on the instance.') ), LAUNCH_CONFIG_ARGS_SERVER_USER_DATA: properties.Schema( properties.Schema.STRING, _('User data for bootstrapping the instance.') ), LAUNCH_CONFIG_ARGS_SERVER_NETWORKS: properties.Schema( properties.Schema.LIST, _('Networks to attach to. 
If unspecified, the instance ' 'will be attached to the public Internet and private ' 'ServiceNet networks.'), schema=properties.Schema( properties.Schema.MAP, schema={ LAUNCH_CONFIG_ARGS_SERVER_NETWORK_UUID: properties.Schema( properties.Schema.STRING, _('UUID of network to attach to.'), required=True) } ) ), LAUNCH_CONFIG_ARGS_SERVER_DISK_CONFIG: properties.Schema( properties.Schema.STRING, _('Configuration specifying the partition layout. AUTO to ' 'create a partition utilizing the entire disk, and ' 'MANUAL to create a partition matching the source ' 'image.'), constraints=[ constraints.AllowedValues(['AUTO', 'MANUAL']), ] ), LAUNCH_CONFIG_ARGS_SERVER_KEY_NAME: properties.Schema( properties.Schema.STRING, _('Name of a previously created SSH keypair to allow ' 'key-based authentication to the server.') ), }, ), LAUNCH_CONFIG_ARGS_STACK: properties.Schema( properties.Schema.MAP, _('The attributes that Auto Scale uses to create a new stack. The ' 'attributes that you specify for the stack entity apply to all ' 'new stacks in the scaling group. Note the stack arguments are ' 'directly passed to Heat when creating a stack.'), schema={ LAUNCH_CONFIG_ARGS_STACK_TEMPLATE: properties.Schema( properties.Schema.STRING, _('The template that describes the stack. Either the ' 'template or template_url property must be specified.'), ), LAUNCH_CONFIG_ARGS_STACK_TEMPLATE_URL: properties.Schema( properties.Schema.STRING, _('A URI to a template. Either the template or ' 'template_url property must be specified.') ), LAUNCH_CONFIG_ARGS_STACK_DISABLE_ROLLBACK: properties.Schema( properties.Schema.BOOLEAN, _('Keep the resources that have been created if the stack ' 'fails to create. 
Defaults to True.'), default=True ), LAUNCH_CONFIG_ARGS_STACK_ENVIRONMENT: properties.Schema( properties.Schema.MAP, _('The environment for the stack.'), ), LAUNCH_CONFIG_ARGS_STACK_FILES: properties.Schema( properties.Schema.MAP, _('The contents of files that the template references.') ), LAUNCH_CONFIG_ARGS_STACK_PARAMETERS: properties.Schema( properties.Schema.MAP, _('Key/value pairs of the parameters and their values to ' 'pass to the parameters in the template.') ), LAUNCH_CONFIG_ARGS_STACK_TIMEOUT_MINS: properties.Schema( properties.Schema.INTEGER, _('The stack creation timeout in minutes.') ) } ) } properties_schema = { GROUP_CONFIGURATION: properties.Schema( properties.Schema.MAP, _('Group configuration.'), schema={ GROUP_CONFIGURATION_MAX_ENTITIES: properties.Schema( properties.Schema.INTEGER, _('Maximum number of entities in this scaling group.'), required=True ), GROUP_CONFIGURATION_COOLDOWN: properties.Schema( properties.Schema.NUMBER, _('Number of seconds after capacity changes during ' 'which further capacity changes are disabled.'), required=True ), GROUP_CONFIGURATION_NAME: properties.Schema( properties.Schema.STRING, _('Name of the scaling group.'), required=True ), GROUP_CONFIGURATION_MIN_ENTITIES: properties.Schema( properties.Schema.INTEGER, _('Minimum number of entities in this scaling group.'), required=True ), GROUP_CONFIGURATION_METADATA: properties.Schema( properties.Schema.MAP, _('Arbitrary key/value metadata to associate with ' 'this group.') ), }, required=True, update_allowed=True ), LAUNCH_CONFIGURATION: properties.Schema( properties.Schema.MAP, _('Launch configuration.'), schema={ LAUNCH_CONFIG_ARGS: properties.Schema( properties.Schema.MAP, _('Type-specific launch arguments.'), schema=_launch_configuration_args_schema, required=True ), LAUNCH_CONFIG_TYPE: properties.Schema( properties.Schema.STRING, _('Launch configuration method. 
Only launch_server and ' 'launch_stack are currently supported.'), required=True, constraints=[ constraints.AllowedValues(['launch_server', 'launch_stack']), ] ), }, required=True, update_allowed=True ), # We don't allow scaling policies to be specified here, despite the # fact that the API supports it. Users should use the ScalingPolicy # resource. } def _get_group_config_args(self, groupconf): """Get the groupConfiguration-related pyrax arguments.""" return dict( name=groupconf[self.GROUP_CONFIGURATION_NAME], cooldown=groupconf[self.GROUP_CONFIGURATION_COOLDOWN], min_entities=groupconf[self.GROUP_CONFIGURATION_MIN_ENTITIES], max_entities=groupconf[self.GROUP_CONFIGURATION_MAX_ENTITIES], metadata=groupconf.get(self.GROUP_CONFIGURATION_METADATA, None)) def _get_launch_config_server_args(self, launchconf): lcargs = launchconf[self.LAUNCH_CONFIG_ARGS] server_args = lcargs[self.LAUNCH_CONFIG_ARGS_SERVER] lb_args = lcargs.get(self.LAUNCH_CONFIG_ARGS_LOAD_BALANCERS) lbs = copy.deepcopy(lb_args) for lb in lbs: # if the port is not specified, the lbid must be that of a # RackConnectV3 lb pool. 
if not lb[self.LAUNCH_CONFIG_ARGS_LOAD_BALANCER_PORT]: del lb[self.LAUNCH_CONFIG_ARGS_LOAD_BALANCER_PORT] continue lbid = int(lb[self.LAUNCH_CONFIG_ARGS_LOAD_BALANCER_ID]) lb[self.LAUNCH_CONFIG_ARGS_LOAD_BALANCER_ID] = lbid personality = server_args.get( self.LAUNCH_CONFIG_ARGS_SERVER_PERSONALITY) if personality: personality = [{'path': k, 'contents': v} for k, v in personality.items()] user_data = server_args.get(self.LAUNCH_CONFIG_ARGS_SERVER_USER_DATA) cdrive = (server_args.get(self.LAUNCH_CONFIG_ARGS_SERVER_CDRIVE) or bool(user_data is not None and len(user_data.strip()))) image_id = self.client_plugin('glance').find_image_by_name_or_id( server_args[self.LAUNCH_CONFIG_ARGS_SERVER_IMAGE_REF]) flavor_id = self.client_plugin('nova').find_flavor_by_name_or_id( server_args[self.LAUNCH_CONFIG_ARGS_SERVER_FLAVOR_REF]) return dict( launch_config_type=launchconf[self.LAUNCH_CONFIG_TYPE], server_name=server_args[self.GROUP_CONFIGURATION_NAME], image=image_id, flavor=flavor_id, disk_config=server_args.get( self.LAUNCH_CONFIG_ARGS_SERVER_DISK_CONFIG), metadata=server_args.get(self.GROUP_CONFIGURATION_METADATA), config_drive=cdrive, user_data=user_data, personality=personality, networks=server_args.get(self.LAUNCH_CONFIG_ARGS_SERVER_NETWORKS), load_balancers=lbs, key_name=server_args.get(self.LAUNCH_CONFIG_ARGS_SERVER_KEY_NAME), ) def _get_launch_config_stack_args(self, launchconf): lcargs = launchconf[self.LAUNCH_CONFIG_ARGS] stack_args = lcargs[self.LAUNCH_CONFIG_ARGS_STACK] return dict( launch_config_type=launchconf[self.LAUNCH_CONFIG_TYPE], template=stack_args[self.LAUNCH_CONFIG_ARGS_STACK_TEMPLATE], template_url=stack_args[ self.LAUNCH_CONFIG_ARGS_STACK_TEMPLATE_URL], disable_rollback=stack_args[ self.LAUNCH_CONFIG_ARGS_STACK_DISABLE_ROLLBACK], environment=stack_args[self.LAUNCH_CONFIG_ARGS_STACK_ENVIRONMENT], files=stack_args[self.LAUNCH_CONFIG_ARGS_STACK_FILES], parameters=stack_args[self.LAUNCH_CONFIG_ARGS_STACK_PARAMETERS], 
timeout_mins=stack_args[self.LAUNCH_CONFIG_ARGS_STACK_TIMEOUT_MINS] ) def _get_launch_config_args(self, launchconf): """Get the launchConfiguration-related pyrax arguments.""" if launchconf[self.LAUNCH_CONFIG_ARGS].get( self.LAUNCH_CONFIG_ARGS_SERVER): return self._get_launch_config_server_args(launchconf) else: return self._get_launch_config_stack_args(launchconf) def _get_create_args(self): """Get pyrax-style arguments for creating a scaling group.""" args = self._get_group_config_args( self.properties[self.GROUP_CONFIGURATION]) args['group_metadata'] = args.pop('metadata') args.update(self._get_launch_config_args( self.properties[self.LAUNCH_CONFIGURATION])) return args def handle_create(self): """Create the autoscaling group and set resource_id. The resource_id is set to the resulting group's ID. """ asclient = self.auto_scale() group = asclient.create(**self._get_create_args()) self.resource_id_set(str(group.id)) def handle_check(self): self.auto_scale().get(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update the group configuration and the launch configuration.""" asclient = self.auto_scale() if self.GROUP_CONFIGURATION in prop_diff: args = self._get_group_config_args( prop_diff[self.GROUP_CONFIGURATION]) asclient.replace(self.resource_id, **args) if self.LAUNCH_CONFIGURATION in prop_diff: args = self._get_launch_config_args( prop_diff[self.LAUNCH_CONFIGURATION]) asclient.replace_launch_config(self.resource_id, **args) def handle_delete(self): """Delete the scaling group. Since Auto Scale doesn't allow deleting a group until all its servers are gone, we must set the minEntities and maxEntities of the group to 0 and then keep trying the delete until Auto Scale has deleted all the servers and the delete will succeed. 
""" if self.resource_id is None: return asclient = self.auto_scale() args = self._get_group_config_args( self.properties[self.GROUP_CONFIGURATION]) args['min_entities'] = 0 args['max_entities'] = 0 try: asclient.replace(self.resource_id, **args) except NotFound: pass def check_delete_complete(self, result): """Try the delete operation until it succeeds.""" if self.resource_id is None: return True try: self.auto_scale().delete(self.resource_id) except Forbidden: return False except NotFound: return True else: return True def _check_rackconnect_v3_pool_exists(self, pool_id): pools = self.client("rackconnect").list_load_balancer_pools() if pool_id in (p.id for p in pools): return True return False def validate(self): super(Group, self).validate() launchconf = self.properties[self.LAUNCH_CONFIGURATION] lcargs = launchconf[self.LAUNCH_CONFIG_ARGS] server_args = lcargs.get(self.LAUNCH_CONFIG_ARGS_SERVER) st_args = lcargs.get(self.LAUNCH_CONFIG_ARGS_STACK) # launch_server and launch_stack are required and mutually exclusive. 
if ((not server_args and not st_args) or (server_args and st_args)): msg = (_('Must provide one of %(server)s or %(stack)s in %(conf)s') % {'server': self.LAUNCH_CONFIG_ARGS_SERVER, 'stack': self.LAUNCH_CONFIG_ARGS_STACK, 'conf': self.LAUNCH_CONFIGURATION}) raise exception.StackValidationFailed(msg) lb_args = lcargs.get(self.LAUNCH_CONFIG_ARGS_LOAD_BALANCERS) lbs = copy.deepcopy(lb_args) for lb in lbs: lb_port = lb.get(self.LAUNCH_CONFIG_ARGS_LOAD_BALANCER_PORT) lb_id = lb[self.LAUNCH_CONFIG_ARGS_LOAD_BALANCER_ID] if not lb_port: # check if lb id is a valid RCV3 pool id if not self._check_rackconnect_v3_pool_exists(lb_id): msg = _('Could not find RackConnectV3 pool ' 'with id %s') % (lb_id) raise exception.StackValidationFailed(msg) if st_args: st_tmpl = st_args.get(self.LAUNCH_CONFIG_ARGS_STACK_TEMPLATE) st_tmpl_url = st_args.get( self.LAUNCH_CONFIG_ARGS_STACK_TEMPLATE_URL) st_env = st_args.get(self.LAUNCH_CONFIG_ARGS_STACK_ENVIRONMENT) # template and template_url are required and mutually exclusive. 
if ((not st_tmpl and not st_tmpl_url) or (st_tmpl and st_tmpl_url)): msg = _('Must provide one of template or template_url.') raise exception.StackValidationFailed(msg) if st_tmpl: st_files = st_args.get(self.LAUNCH_CONFIG_ARGS_STACK_FILES) try: tmpl = template_format.simple_parse(st_tmpl) templatem.Template(tmpl, files=st_files, env=st_env) except Exception as exc: msg = (_('Encountered error while loading template: %s') % six.text_type(exc)) raise exception.StackValidationFailed(msg) def auto_scale(self): return self.client('auto_scale') class ScalingPolicy(resource.Resource): """Represents a Rackspace Auto Scale scaling policy.""" support_status = support.SupportStatus( status=support.UNSUPPORTED, message=_('This resource is not supported, use at your own risk.')) PROPERTIES = ( GROUP, NAME, CHANGE, CHANGE_PERCENT, DESIRED_CAPACITY, COOLDOWN, TYPE, ARGS, ) = ( 'group', 'name', 'change', 'changePercent', 'desiredCapacity', 'cooldown', 'type', 'args', ) properties_schema = { # group isn't in the post body, but it's in the URL to post to. GROUP: properties.Schema( properties.Schema.STRING, _('Scaling group ID that this policy belongs to.'), required=True ), NAME: properties.Schema( properties.Schema.STRING, _('Name of this scaling policy.'), required=True, update_allowed=True ), CHANGE: properties.Schema( properties.Schema.INTEGER, _('Amount to add to or remove from current number of instances. ' 'Incompatible with changePercent and desiredCapacity.'), update_allowed=True ), CHANGE_PERCENT: properties.Schema( properties.Schema.NUMBER, _('Percentage-based change to add or remove from current number ' 'of instances. Incompatible with change and desiredCapacity.'), update_allowed=True ), DESIRED_CAPACITY: properties.Schema( properties.Schema.INTEGER, _('Absolute number to set the number of instances to. 
' 'Incompatible with change and changePercent.'), update_allowed=True ), COOLDOWN: properties.Schema( properties.Schema.NUMBER, _('Number of seconds after a policy execution during which ' 'further executions are disabled.'), update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, _('Type of this scaling policy. Specifies how the policy is ' 'executed.'), required=True, constraints=[ constraints.AllowedValues(['webhook', 'schedule', 'cloud_monitoring']), ], update_allowed=True ), ARGS: properties.Schema( properties.Schema.MAP, _('Type-specific arguments for the policy.'), update_allowed=True ), } def _get_args(self, properties): """Get pyrax-style create arguments for scaling policies.""" args = dict( scaling_group=properties[self.GROUP], name=properties[self.NAME], policy_type=properties[self.TYPE], cooldown=properties[self.COOLDOWN], ) if properties.get(self.CHANGE) is not None: args['change'] = properties[self.CHANGE] elif properties.get(self.CHANGE_PERCENT) is not None: args['change'] = properties[self.CHANGE_PERCENT] args['is_percent'] = True elif properties.get(self.DESIRED_CAPACITY) is not None: args['desired_capacity'] = properties[self.DESIRED_CAPACITY] if properties.get(self.ARGS) is not None: args['args'] = properties[self.ARGS] return args def handle_create(self): """Create the scaling policy and initialize the resource ID. The resource ID is initialized to {group_id}:{policy_id}. 
""" asclient = self.auto_scale() args = self._get_args(self.properties) policy = asclient.add_policy(**args) resource_id = '%s:%s' % (self.properties[self.GROUP], policy.id) self.resource_id_set(resource_id) def _get_policy_id(self): return self.resource_id.split(':', 1)[1] def handle_update(self, json_snippet, tmpl_diff, prop_diff): asclient = self.auto_scale() props = json_snippet.properties(self.properties_schema, self.context) args = self._get_args(props) args['policy'] = self._get_policy_id() asclient.replace_policy(**args) def handle_delete(self): """Delete the policy if it exists.""" asclient = self.auto_scale() if self.resource_id is None: return policy_id = self._get_policy_id() try: asclient.delete_policy(self.properties[self.GROUP], policy_id) except NotFound: pass def auto_scale(self): return self.client('auto_scale') class WebHook(resource.Resource): """Represents a Rackspace AutoScale webhook. Exposes the URLs of the webhook as attributes. """ support_status = support.SupportStatus( status=support.UNSUPPORTED, message=_('This resource is not supported, use at your own risk.')) PROPERTIES = ( POLICY, NAME, METADATA, ) = ( 'policy', 'name', 'metadata', ) ATTRIBUTES = ( EXECUTE_URL, CAPABILITY_URL, ) = ( 'executeUrl', 'capabilityUrl', ) properties_schema = { POLICY: properties.Schema( properties.Schema.STRING, _('The policy that this webhook should apply to, in ' '{group_id}:{policy_id} format. 
Generally a Ref to a Policy ' 'resource.'), required=True ), NAME: properties.Schema( properties.Schema.STRING, _('The name of this webhook.'), required=True, update_allowed=True ), METADATA: properties.Schema( properties.Schema.MAP, _('Arbitrary key/value metadata for this webhook.'), update_allowed=True ), } attributes_schema = { EXECUTE_URL: attributes.Schema( _("The url for executing the webhook (requires auth)."), cache_mode=attributes.Schema.CACHE_NONE ), CAPABILITY_URL: attributes.Schema( _("The url for executing the webhook (doesn't require auth)."), cache_mode=attributes.Schema.CACHE_NONE ), } def _get_args(self, props): group_id, policy_id = props[self.POLICY].split(':', 1) return dict( name=props[self.NAME], scaling_group=group_id, policy=policy_id, metadata=props.get(self.METADATA)) def handle_create(self): asclient = self.auto_scale() args = self._get_args(self.properties) webhook = asclient.add_webhook(**args) self.resource_id_set(webhook.id) for link in webhook.links: rel_to_key = {'self': 'executeUrl', 'capability': 'capabilityUrl'} key = rel_to_key.get(link['rel']) if key is not None: url = link['href'].encode('utf-8') self.data_set(key, url) def handle_update(self, json_snippet, tmpl_diff, prop_diff): asclient = self.auto_scale() args = self._get_args(json_snippet.properties(self.properties_schema, self.context)) args['webhook'] = self.resource_id asclient.replace_webhook(**args) def _resolve_attribute(self, key): v = self.data().get(key) if v is not None: return v.decode('utf-8') else: return None def handle_delete(self): if self.resource_id is None: return asclient = self.auto_scale() group_id, policy_id = self.properties[self.POLICY].split(':', 1) try: asclient.delete_webhook(group_id, policy_id, self.resource_id) except NotFound: pass def auto_scale(self): return self.client('auto_scale') def resource_mapping(): return { 'Rackspace::AutoScale::Group': Group, 'Rackspace::AutoScale::ScalingPolicy': ScalingPolicy, 'Rackspace::AutoScale::WebHook': 
WebHook,
    }


def available_resource_mapping():
    if PYRAX_INSTALLED:
        return resource_mapping()
    return {}


# File: heat-10.0.0/contrib/rackspace/rackspace/resources/__init__.py (empty)

# File: heat-10.0.0/contrib/rackspace/rackspace/resources/cloudnetworks.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log as logging
import six

from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine import support

try:
    from pyrax.exceptions import NetworkInUse  # noqa
    from pyrax.exceptions import NotFound  # noqa
    PYRAX_INSTALLED = True
except ImportError:
    PYRAX_INSTALLED = False

    class NotFound(Exception):
        """Dummy pyrax exception - only used for testing."""

    class NetworkInUse(Exception):
        """Dummy pyrax exception - only used for testing."""

LOG = logging.getLogger(__name__)


class CloudNetwork(resource.Resource):
    """A resource for creating Rackspace Cloud Networks.

    See http://www.rackspace.com/cloud/networks/ for service documentation.
""" support_status = support.SupportStatus( status=support.HIDDEN, version='6.0.0', previous_status=support.SupportStatus( status=support.DEPRECATED, message=_('Use OS::Neutron::Net instead.'), version='2015.1', previous_status=support.SupportStatus(version='2014.1') ) ) PROPERTIES = ( LABEL, CIDR ) = ( "label", "cidr" ) ATTRIBUTES = ( CIDR_ATTR, LABEL_ATTR, ) = ( 'cidr', 'label', ) properties_schema = { LABEL: properties.Schema( properties.Schema.STRING, _("The name of the network."), required=True, constraints=[ constraints.Length(min=3, max=64) ] ), CIDR: properties.Schema( properties.Schema.STRING, _("The IP block from which to allocate the network. For example, " "172.16.0.0/24 or 2001:DB8::/64."), required=True, constraints=[ constraints.CustomConstraint('net_cidr') ] ) } attributes_schema = { CIDR_ATTR: attributes.Schema( _("The CIDR for an isolated private network.") ), LABEL_ATTR: attributes.Schema( _("The name of the network.") ), } def __init__(self, name, json_snippet, stack): resource.Resource.__init__(self, name, json_snippet, stack) self._network = None self._delete_issued = False def network(self): if self.resource_id and not self._network: try: self._network = self.cloud_networks().get(self.resource_id) except NotFound: LOG.warning("Could not find network %s but resource id is" " set.", self.resource_id) return self._network def cloud_networks(self): return self.client('cloud_networks') def handle_create(self): cnw = self.cloud_networks().create(label=self.properties[self.LABEL], cidr=self.properties[self.CIDR]) self.resource_id_set(cnw.id) def handle_check(self): self.cloud_networks().get(self.resource_id) def check_delete_complete(self, cookie): if not self.resource_id: return True try: network = self.cloud_networks().get(self.resource_id) except NotFound: return True if not network: return True if not self._delete_issued: try: network.delete() except NetworkInUse: LOG.warning("Network '%s' still in use.", network.id) else: self._delete_issued = 
True
            return False

        return False

    def validate(self):
        super(CloudNetwork, self).validate()

    def _resolve_attribute(self, name):
        net = self.network()
        if net:
            return six.text_type(getattr(net, name))
        return ""


def resource_mapping():
    return {'Rackspace::Cloud::Network': CloudNetwork}


def available_resource_mapping():
    if PYRAX_INSTALLED:
        return resource_mapping()
    return {}


# File: heat-10.0.0/contrib/rackspace/rackspace/resources/cloud_server.py

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from oslo_log import log as logging

from heat.common import exception
from heat.common.i18n import _
from heat.engine import constraints
from heat.engine import properties
from heat.engine.resources.openstack.nova import server
from heat.engine import support

try:
    import pyrax  # noqa
    PYRAX_INSTALLED = True
except ImportError:
    PYRAX_INSTALLED = False

LOG = logging.getLogger(__name__)


class CloudServer(server.Server):
    """Resource for Rackspace Cloud Servers.

    This resource overloads existent integrated OS::Nova::Server resource and
    is used for Rackspace Cloud Servers.
""" support_status = support.SupportStatus( status=support.UNSUPPORTED, message=_('This resource is not supported, use at your own risk.')) # Rackspace Cloud automation statuses SM_STATUS_IN_PROGRESS = 'In Progress' SM_STATUS_COMPLETE = 'Complete' SM_STATUS_BUILD_ERROR = 'Build Error' # RackConnect automation statuses RC_STATUS_DEPLOYING = 'DEPLOYING' RC_STATUS_DEPLOYED = 'DEPLOYED' RC_STATUS_FAILED = 'FAILED' RC_STATUS_UNPROCESSABLE = 'UNPROCESSABLE' # Nova Extra specs FLAVOR_EXTRA_SPECS = 'OS-FLV-WITH-EXT-SPECS:extra_specs' FLAVOR_CLASSES_KEY = 'flavor_classes' FLAVOR_ACCEPT_ANY = '*' FLAVOR_CLASS = 'class' DISK_IO_INDEX = 'disk_io_index' FLAVOR_CLASSES = ( GENERAL1, MEMORY1, PERFORMANCE2, PERFORMANCE1, STANDARD1, IO1, ONMETAL, COMPUTE1 ) = ( 'general1', 'memory1', 'performance2', 'performance1', 'standard1', 'io1', 'onmetal', 'compute1', ) BASE_IMAGE_REF = 'base_image_ref' # flavor classes that can be booted ONLY from volume BFV_VOLUME_REQUIRED = {MEMORY1, COMPUTE1} # flavor classes that can NOT be booted from volume NON_BFV = {STANDARD1, ONMETAL} properties_schema = copy.deepcopy(server.Server.properties_schema) properties_schema.update( { server.Server.USER_DATA_FORMAT: properties.Schema( properties.Schema.STRING, _('How the user_data should be formatted for the server. ' 'For RAW the user_data is passed to Nova unmodified. ' 'For SOFTWARE_CONFIG user_data is bundled as part of the ' 'software config data, and metadata is derived from any ' 'associated SoftwareDeployment resources.'), default=server.Server.RAW, constraints=[ constraints.AllowedValues([ server.Server.RAW, server.Server.SOFTWARE_CONFIG ]) ] ), } ) properties_schema.update( { server.Server.SOFTWARE_CONFIG_TRANSPORT: properties.Schema( properties.Schema.STRING, _('How the server should receive the metadata required for ' 'software configuration. POLL_TEMP_URL is the only ' 'supported transport on Rackspace Cloud. 
This property is ' 'retained for compatibility.'), default=server.Server.POLL_TEMP_URL, update_allowed=True, constraints=[ constraints.AllowedValues([ server.Server.POLL_TEMP_URL ]) ] ), } ) def __init__(self, name, json_snippet, stack): super(CloudServer, self).__init__(name, json_snippet, stack) self._managed_cloud_started_event_sent = False self._rack_connect_started_event_sent = False def _config_drive(self): user_data_format = self.properties[self.USER_DATA_FORMAT] is_sw_config = user_data_format == self.SOFTWARE_CONFIG user_data = self.properties.get(self.USER_DATA) config_drive = self.properties.get(self.CONFIG_DRIVE) if config_drive or is_sw_config or user_data: return True else: return False def _check_rax_automation_complete(self, server): if not self._managed_cloud_started_event_sent: msg = _("Waiting for Rackspace Cloud automation to complete") self._add_event(self.action, self.status, msg) self._managed_cloud_started_event_sent = True if 'rax_service_level_automation' not in server.metadata: LOG.debug("Cloud server does not have the " "rax_service_level_automation metadata tag yet") return False mc_status = server.metadata['rax_service_level_automation'] LOG.debug("Rackspace Cloud automation status: %s" % mc_status) if mc_status == self.SM_STATUS_IN_PROGRESS: return False elif mc_status == self.SM_STATUS_COMPLETE: msg = _("Rackspace Cloud automation has completed") self._add_event(self.action, self.status, msg) return True elif mc_status == self.SM_STATUS_BUILD_ERROR: raise exception.Error(_("Rackspace Cloud automation failed")) else: raise exception.Error(_("Unknown Rackspace Cloud automation " "status: %s") % mc_status) def _check_rack_connect_complete(self, server): if not self._rack_connect_started_event_sent: msg = _("Waiting for RackConnect automation to complete") self._add_event(self.action, self.status, msg) self._rack_connect_started_event_sent = True if 'rackconnect_automation_status' not in server.metadata: LOG.debug("RackConnect server 
does not have the " "rackconnect_automation_status metadata tag yet") return False rc_status = server.metadata['rackconnect_automation_status'] LOG.debug("RackConnect automation status: %s" % rc_status) if rc_status == self.RC_STATUS_DEPLOYING: return False elif rc_status == self.RC_STATUS_DEPLOYED: self._server = None  # The public IP changed, forget old one return True elif rc_status == self.RC_STATUS_UNPROCESSABLE: # UNPROCESSABLE means the RackConnect automation was not # attempted (e.g. Cloud Server in a different DC than # dedicated gear, so RackConnect does not apply). It is # okay if we do not raise an exception. reason = server.metadata.get('rackconnect_unprocessable_reason', None) if reason is not None: LOG.warning("RackConnect unprocessable reason: %s", reason) msg = _("RackConnect automation has completed") self._add_event(self.action, self.status, msg) return True elif rc_status == self.RC_STATUS_FAILED: raise exception.Error(_("RackConnect automation FAILED")) else: msg = _("Unknown RackConnect automation status: %s") % rc_status raise exception.Error(msg) def check_create_complete(self, server_id): """Check if server creation is complete and handle server configs.""" if not super(CloudServer, self).check_create_complete(server_id): return False server = self.client_plugin().fetch_server(server_id) if not server: return False if ('rack_connect' in self.context.roles and not self._check_rack_connect_complete(server)): return False if not self._check_rax_automation_complete(server): return False return True # Since the Rackspace compute service does not support the 'os-interface' # endpoint, accessing the addresses attribute of OS::Nova::Server results in # a NotFound error. Here we override the '_add_port_for_address' method and # use the 'os-virtual-interfacesv2' endpoint instead to obtain the same # information. 
def _add_port_for_address(self, server): def get_port(net_name, address): for iface in ifaces: for ip_addr in iface.ip_addresses: if ip_addr['network_label'] == net_name and ip_addr[ 'address'] == address: return iface.id nets = copy.deepcopy(server.addresses) nova_ext = self.client().os_virtual_interfacesv2_python_novaclient_ext ifaces = nova_ext.list(server.id) for net_name, addresses in nets.items(): for address in addresses: address['port'] = get_port(net_name, address['addr']) return self._extend_networks(nets) def _base_image_obj(self, image): image_obj = self.client_plugin('glance').get_image(image) if self.BASE_IMAGE_REF in image_obj: base_image = image_obj[self.BASE_IMAGE_REF] return self.client_plugin('glance').get_image(base_image) return image_obj def _image_flavor_class_match(self, flavor_type, image): base_image_obj = self._base_image_obj(image) flavor_class_string = base_image_obj.get(self.FLAVOR_CLASSES_KEY) # If the flavor_class_string metadata does not exist or is # empty, do not validate image/flavor combo if not flavor_class_string: return True flavor_class_excluded = "!{0}".format(flavor_type) flavor_classes_accepted = flavor_class_string.split(',') if flavor_type in flavor_classes_accepted: return True if (self.FLAVOR_ACCEPT_ANY in flavor_classes_accepted and flavor_class_excluded not in flavor_classes_accepted): return True return False def validate(self): """Validate for Rackspace Cloud specific parameters""" super(CloudServer, self).validate() # check if image, flavor combination is valid flavor = self.properties[self.FLAVOR] flavor_obj = self.client_plugin().get_flavor(flavor) fl_xtra_specs = flavor_obj.to_dict().get(self.FLAVOR_EXTRA_SPECS, {}) flavor_type = fl_xtra_specs.get(self.FLAVOR_CLASS, None) image = self.properties.get(self.IMAGE) if not image: if flavor_type in self.NON_BFV: msg = _('Flavor %s cannot be booted from volume.') % flavor raise exception.StackValidationFailed(message=msg) else: # we cannot determine details of the 
attached volume, so this # is all the validation possible return if not self._image_flavor_class_match(flavor_type, image): msg = _('Flavor %(flavor)s cannot be used with image ' '%(image)s.') % {'image': image, 'flavor': flavor} raise exception.StackValidationFailed(message=msg) if flavor_type in self.BFV_VOLUME_REQUIRED: msg = _('Flavor %(flavor)s must be booted from volume, ' 'but image %(image)s was also specified.') % { 'flavor': flavor, 'image': image} raise exception.StackValidationFailed(message=msg) def resource_mapping(): return {'OS::Nova::Server': CloudServer} def available_resource_mapping(): if PYRAX_INSTALLED: return resource_mapping() return {} heat-10.0.0/contrib/rackspace/rackspace/resources/cloud_dns.py0000666000175100017510000001672613245511554024446 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Resources for Rackspace DNS.""" from oslo_log import log as logging from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource from heat.engine import support try: from pyrax.exceptions import NotFound PYRAX_INSTALLED = True except ImportError: # Setup fake exception for testing without pyrax class NotFound(Exception): pass PYRAX_INSTALLED = False LOG = logging.getLogger(__name__) class CloudDns(resource.Resource): """Represents a DNS resource.""" support_status = support.SupportStatus( status=support.UNSUPPORTED, message=_('This resource is not supported, use at your own risk.')) PROPERTIES = ( NAME, EMAIL_ADDRESS, TTL, COMMENT, RECORDS, ) = ( 'name', 'emailAddress', 'ttl', 'comment', 'records', ) _RECORD_KEYS = ( RECORD_COMMENT, RECORD_NAME, RECORD_DATA, RECORD_PRIORITY, RECORD_TTL, RECORD_TYPE, ) = ( 'comment', 'name', 'data', 'priority', 'ttl', 'type', ) properties_schema = { NAME: properties.Schema( properties.Schema.STRING, _('Specifies the name for the domain or subdomain. 
Must be a ' 'valid domain name.'), required=True, constraints=[ constraints.Length(min=3), ] ), EMAIL_ADDRESS: properties.Schema( properties.Schema.STRING, _('Email address to use for contacting the domain administrator.'), required=True, update_allowed=True ), TTL: properties.Schema( properties.Schema.INTEGER, _('How long other servers should cache recorddata.'), default=3600, constraints=[ constraints.Range(min=300), ], update_allowed=True ), COMMENT: properties.Schema( properties.Schema.STRING, _('Optional free form text comment'), constraints=[ constraints.Length(max=160), ], update_allowed=True ), RECORDS: properties.Schema( properties.Schema.LIST, _('Domain records'), schema=properties.Schema( properties.Schema.MAP, schema={ RECORD_COMMENT: properties.Schema( properties.Schema.STRING, _('Optional free form text comment'), constraints=[ constraints.Length(max=160), ] ), RECORD_NAME: properties.Schema( properties.Schema.STRING, _('Specifies the name for the domain or ' 'subdomain. Must be a valid domain name.'), required=True, constraints=[ constraints.Length(min=3), ] ), RECORD_DATA: properties.Schema( properties.Schema.STRING, _('Type specific record data'), required=True ), RECORD_PRIORITY: properties.Schema( properties.Schema.INTEGER, _('Required for MX and SRV records, but ' 'forbidden for other record types. 
If ' 'specified, must be an integer from 0 to ' '65535.'), constraints=[ constraints.Range(0, 65535), ] ), RECORD_TTL: properties.Schema( properties.Schema.INTEGER, _('How long other servers should cache ' 'recorddata.'), default=3600, constraints=[ constraints.Range(min=300), ] ), RECORD_TYPE: properties.Schema( properties.Schema.STRING, _('Specifies the record type.'), required=True, constraints=[ constraints.AllowedValues(['A', 'AAAA', 'NS', 'MX', 'CNAME', 'TXT', 'SRV']), ] ), }, ), update_allowed=True ), } def cloud_dns(self): return self.client('cloud_dns') def handle_create(self): """Create a Rackspace CloudDns Instance.""" # There is no check_create_complete as the pyrax create for DNS is # synchronous. LOG.debug("CloudDns handle_create called.") args = dict((k, v) for k, v in self.properties.items()) for rec in args[self.RECORDS] or {}: # only pop the priority for the correct types rec_type = rec[self.RECORD_TYPE] if (rec_type != 'MX') and (rec_type != 'SRV'): rec.pop(self.RECORD_PRIORITY, None) dom = self.cloud_dns().create(**args) self.resource_id_set(dom.id) def handle_check(self): self.cloud_dns().get(self.resource_id) def handle_update(self, json_snippet, tmpl_diff, prop_diff): """Update a Rackspace CloudDns Instance.""" LOG.debug("CloudDns handle_update called.") if not self.resource_id: raise exception.Error(_('Update called on a non-existent domain')) if prop_diff: dom = self.cloud_dns().get(self.resource_id) # handle records separately records = prop_diff.pop(self.RECORDS, {}) if prop_diff: # Handle top level domain properties dom.update(**prop_diff) # handle records if records: recs = dom.list_records() # 1. delete all the current records other than rackspace NS records [rec.delete() for rec in recs if rec.type != 'NS' or 'stabletransit.com' not in rec.data] # 2. 
update with the new records in prop_diff dom.add_records(records) def handle_delete(self): """Delete a Rackspace CloudDns Instance.""" LOG.debug("CloudDns handle_delete called.") if self.resource_id: try: dom = self.cloud_dns().get(self.resource_id) dom.delete() except NotFound: pass def resource_mapping(): return {'Rackspace::Cloud::DNS': CloudDns} def available_resource_mapping(): if PYRAX_INSTALLED: return resource_mapping() return {} heat-10.0.0/contrib/rackspace/rackspace/resources/lb_node.py0000666000175100017510000001452313245511554024067 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from oslo_utils import timeutils import six from heat.common import exception from heat.common.i18n import _ from heat.engine import constraints from heat.engine import properties from heat.engine import resource try: from pyrax.exceptions import NotFound # noqa PYRAX_INSTALLED = True except ImportError: # Setup fake exception for testing without pyrax class NotFound(Exception): pass PYRAX_INSTALLED = False def lb_immutable(exc): return 'immutable' in six.text_type(exc) class LoadbalancerDeleted(exception.HeatException): msg_fmt = _("The Load Balancer (ID %(lb_id)s) has been deleted.") class NodeNotFound(exception.HeatException): msg_fmt = _("Node (ID %(node_id)s) not found on Load Balancer " "(ID %(lb_id)s).") class LBNode(resource.Resource): """Represents a single node of a Rackspace Cloud Load Balancer""" default_client_name = 'cloud_lb' _CONDITIONS = ( ENABLED, DISABLED, DRAINING, ) = ( 'ENABLED', 'DISABLED', 'DRAINING', ) _NODE_KEYS = ( ADDRESS, PORT, CONDITION, TYPE, WEIGHT ) = ( 'address', 'port', 'condition', 'type', 'weight' ) _OTHER_KEYS = ( LOAD_BALANCER, DRAINING_TIMEOUT ) = ( 'load_balancer', 'draining_timeout' ) PROPERTIES = _NODE_KEYS + _OTHER_KEYS properties_schema = { LOAD_BALANCER: properties.Schema( properties.Schema.STRING, _("The ID of the load balancer to associate the node with."), required=True ), DRAINING_TIMEOUT: properties.Schema( properties.Schema.INTEGER, _("The time to wait, in seconds, for the node to drain before it " "is deleted."), default=0, constraints=[ constraints.Range(min=0) ], update_allowed=True ), ADDRESS: properties.Schema( properties.Schema.STRING, _("IP address for the node."), required=True ), PORT: properties.Schema( properties.Schema.INTEGER, required=True ), CONDITION: properties.Schema( properties.Schema.STRING, default=ENABLED, constraints=[ constraints.AllowedValues(_CONDITIONS), ], update_allowed=True ), TYPE: properties.Schema( properties.Schema.STRING, constraints=[ 
constraints.AllowedValues(['PRIMARY', 'SECONDARY']), ], update_allowed=True ), WEIGHT: properties.Schema( properties.Schema.NUMBER, constraints=[ constraints.Range(1, 100), ], update_allowed=True ), } def lb(self): lb_id = self.properties.get(self.LOAD_BALANCER) lb = self.client().get(lb_id) if lb.status in ('DELETED', 'PENDING_DELETE'): raise LoadbalancerDeleted(lb_id=lb.id) return lb def node(self, lb): for node in getattr(lb, 'nodes', []): if node.id == self.resource_id: return node raise NodeNotFound(node_id=self.resource_id, lb_id=lb.id) def handle_create(self): pass def check_create_complete(self, *args): node_args = {k: self.properties.get(k) for k in self._NODE_KEYS} node = self.client().Node(**node_args) try: resp, body = self.lb().add_nodes([node]) except Exception as exc: if lb_immutable(exc): return False raise new_node = body['nodes'][0] node_id = new_node['id'] self.resource_id_set(node_id) return True def handle_update(self, json_snippet, tmpl_diff, prop_diff): return prop_diff def check_update_complete(self, prop_diff): node = self.node(self.lb()) is_complete = True for key in self._NODE_KEYS: if key in prop_diff and getattr(node, key, None) != prop_diff[key]: setattr(node, key, prop_diff[key]) is_complete = False if is_complete: return True try: node.update() except Exception as exc: if lb_immutable(exc): return False raise return False def handle_delete(self): return timeutils.utcnow() def check_delete_complete(self, deleted_at): if self.resource_id is None: return True try: node = self.node(self.lb()) except (NotFound, LoadbalancerDeleted, NodeNotFound): return True if isinstance(deleted_at, six.string_types): deleted_at = timeutils.parse_isotime(deleted_at) deleted_at = timeutils.normalize_time(deleted_at) waited = timeutils.utcnow() - deleted_at timeout_secs = self.properties[self.DRAINING_TIMEOUT] timeout_secs = datetime.timedelta(seconds=timeout_secs) if waited > timeout_secs: try: node.delete() except NotFound: return True except Exception 
as exc: if lb_immutable(exc): return False raise elif node.condition != self.DRAINING: node.condition = self.DRAINING try: node.update() except Exception as exc: if lb_immutable(exc): return False raise return False def resource_mapping(): return {'Rackspace::Cloud::LBNode': LBNode} def available_resource_mapping(): if PYRAX_INSTALLED: return resource_mapping() return {} heat-10.0.0/doc/0000775000175100017510000000000013245512113013277 5ustar zuulzuul00000000000000heat-10.0.0/doc/README.rst0000666000175100017510000000135513245511546015005 0ustar zuulzuul00000000000000=========================== Building the developer docs =========================== For user and admin docs, go to the directory `doc/docbkx`. Dependencies ============ You'll need to install python *Sphinx* package and *oslosphinx* package: :: sudo pip install sphinx oslosphinx If you are using the virtualenv you'll need to install them in the virtualenv. Get Help ======== Just type make to get help: :: make It will list available build targets. Build Doc ========= To build the man pages: :: make man To build the developer documentation as HTML: :: make html Type *make* for more formats. Test Doc ======== If you modify doc files, you can type: :: make doctest to check whether the format has problem. heat-10.0.0/doc/source/0000775000175100017510000000000013245512113014577 5ustar zuulzuul00000000000000heat-10.0.0/doc/source/man/0000775000175100017510000000000013245512113015352 5ustar zuulzuul00000000000000heat-10.0.0/doc/source/man/heat-engine.rst0000666000175100017510000000204313245511546020302 0ustar zuulzuul00000000000000=========== heat-engine =========== .. program:: heat-engine SYNOPSIS ======== ``heat-engine [options]`` DESCRIPTION =========== heat-engine is the heat project server with an internal RPC api called by the heat-api server. INVENTORY ========= The heat-engine does all the orchestration work and is the layer in which the resource integration is implemented. OPTIONS ======= .. 
cmdoption:: --config-file Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. .. cmdoption:: --config-dir Path to a config directory to pull .conf files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s), if any, specified via --config-file, hence over-ridden options in the directory take precedence. .. cmdoption:: --version Show program's version number and exit. The output could be empty if the distribution didn't specify any version information. FILES ======== * /etc/heat/heat.conf heat-10.0.0/doc/source/man/heat-manage.rst0000666000175100017510000000374213245511546020274 0ustar zuulzuul00000000000000=========== heat-manage =========== .. program:: heat-manage SYNOPSIS ======== ``heat-manage [options]`` DESCRIPTION =========== heat-manage helps manage heat specific database operations. OPTIONS ======= The standard pattern for executing a heat-manage command is: ``heat-manage []`` Run with -h to see a list of available commands: ``heat-manage -h`` Commands are ``db_version``, ``db_sync``, ``purge_deleted``, ``migrate_convergence_1``, ``migrate_properties_data``, and ``service``. Detailed descriptions are below. ``heat-manage db_version`` Print out the db schema version. ``heat-manage db_sync`` Sync the database up to the most recent version. ``heat-manage purge_deleted [-g {days,hours,minutes,seconds}] [-p project_id] [age]`` Purge db entries marked as deleted and older than [age]. When project_id argument is provided, only entries belonging to this project will be purged. ``heat-manage migrate_properties_data`` Migrates properties data from the legacy locations in the db (resource.properties_data and event.resource_properties) to the modern location, the resource_properties_data table. ``heat-manage migrate_convergence_1 [stack_id]`` Migrates [stack_id] from non-convergence to convergence. 
This requires running convergence enabled heat engine(s) and can't be done when they are offline. ``heat-manage service list`` Shows details for all currently running heat-engines. ``heat-manage service clean`` Clean dead engine records. ``heat-manage --version`` Shows program's version number and exit. The output could be empty if the distribution didn't specify any version information. FILES ===== The /etc/heat/heat.conf file contains global options which can be used to configure some aspects of heat-manage, for example the DB connection and logging. BUGS ==== * Heat issues are tracked in Launchpad so you can view or report bugs here `OpenStack Heat Bugs `__ heat-10.0.0/doc/source/man/heat-api.rst0000666000175100017510000000205613245511546017612 0ustar zuulzuul00000000000000======== heat-api ======== .. program:: heat-api SYNOPSIS ======== ``heat-api [options]`` DESCRIPTION =========== heat-api provides an external REST API to the heat project. INVENTORY ========= heat-api is a service that exposes an external REST based api to the heat-engine service. The communication between the heat-api and heat-engine uses message queue based RPC. OPTIONS ======= .. cmdoption:: --config-file Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. .. cmdoption:: --config-dir Path to a config directory to pull .conf files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s), if any, specified via --config-file, hence over-ridden options in the directory take precedence. .. cmdoption:: --version Show program's version number and exit. The output could be empty if the distribution didn't specify any version information. 
FILES ======== * /etc/heat/heat.conf heat-10.0.0/doc/source/man/heat-keystone-setup.rst0000666000175100017510000000132013245511546022031 0ustar zuulzuul00000000000000=================== heat-keystone-setup =================== .. program:: heat-keystone-setup SYNOPSIS ======== ``heat-keystone-setup`` DESCRIPTION =========== Warning: This script is deprecated, please use other tool to setup keystone for heat. The ``heat-keystone-setup`` tool configures keystone for use with heat. This script requires admin keystone credentials to be available in the shell environment and write access to ``/etc/keystone``. Distributions may provide other tools to setup keystone for use with Heat, so check the distro documentation first. EXAMPLES ======== heat-keystone-setup BUGS ==== Heat bugs are managed through Launchpad `OpenStack Heat Bugs `__ heat-10.0.0/doc/source/man/heat-api-cfn.rst0000666000175100017510000000213313245511546020352 0ustar zuulzuul00000000000000============ heat-api-cfn ============ .. program:: heat-api-cfn SYNOPSIS ======== ``heat-api-cfn [options]`` DESCRIPTION =========== heat-api-cfn is a CloudFormation compatible API service to the heat project. INVENTORY ========= heat-api-cfn is a service that exposes an external REST based api to the heat-engine service. The communication between the heat-api-cfn and heat-engine uses message queue based RPC. OPTIONS ======= .. cmdoption:: --config-file Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. .. cmdoption:: --config-dir Path to a config directory to pull .conf files from. This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s), if any, specified via --config-file, hence over-ridden options in the directory take precedence. .. cmdoption:: --version Show program's version number and exit. 
The output could be empty if the distribution didn't specify any version information. FILES ======== * /etc/heat/heat.conf heat-10.0.0/doc/source/man/heat-db-setup.rst0000666000175100017510000000313313245511546020561 0ustar zuulzuul00000000000000============= heat-db-setup ============= .. program:: heat-db-setup SYNOPSIS ======== ``heat-db-setup [COMMANDS] [OPTIONS]`` DESCRIPTION =========== heat-db-setup is a tool which configures the local MySQL database for heat. Typically distro-specific tools would provide this functionality so please read the distro-specific documentation for configuring heat. COMMANDS ======== ``rpm`` Indicate the distribution is a RPM packaging based distribution. ``deb`` Indicate the distribution is a DEB packaging based distribution. OPTIONS ======= .. cmdoption:: -h, --help Print usage information. .. cmdoption:: -p, --password Specify the password for the 'heat' MySQL user that the script will use to connect to the 'heat' MySQL database. By default, the password 'heat' will be used. .. cmdoption:: -r, --rootpw Specify the root MySQL password. If the script installs the MySQL server, it will set the root password to this value instead of prompting for a password. If the MySQL server is already installed, this password will be used to connect to the database instead of having to prompt for it. .. cmdoption:: -y, --yes In cases where the script would normally ask for confirmation before doing something, such as installing mysql-server, just assume yes. This is useful if you want to run the script non-interactively. 
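Under the hood the script drives a short sequence of MySQL statements. Roughly, a run such as ``heat-db-setup rpm -p heat_password -r mysql_pwd -y`` corresponds to the following (a simplified sketch only — the exact statements, host grants and privileges may differ between releases, and the passwords shown are placeholders):

::

    CREATE DATABASE heat;
    CREATE USER 'heat'@'localhost' IDENTIFIED BY 'heat_password';
    GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost';
    FLUSH PRIVILEGES;
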
EXAMPLES ======== heat-db-setup rpm -p heat_password -r mysql_pwd -y heat-db-setup deb -p heat_password -r mysql_pwd -y heat-db-setup rpm BUGS ==== Heat bugs are managed through Launchpad `OpenStack Heat Bugs `__ heat-10.0.0/doc/source/man/index.rst0000666000175100017510000000060613245511546017230 0ustar zuulzuul00000000000000==================================== Man pages for services and utilities ==================================== ------------- Heat services ------------- .. toctree:: :maxdepth: 2 heat-engine heat-api heat-api-cfn -------------- Heat utilities -------------- .. toctree:: :maxdepth: 2 heat-manage heat-db-setup heat-keystone-setup heat-keystone-setup-domain heat-10.0.0/doc/source/man/heat-keystone-setup-domain.rst0000666000175100017510000000663313245511546023312 0ustar zuulzuul00000000000000========================== heat-keystone-setup-domain ========================== .. program:: heat-keystone-setup-domain SYNOPSIS ======== ``heat-keystone-setup-domain [OPTIONS]`` DESCRIPTION =========== The `heat-keystone-setup-domain` tool configures keystone by creating a 'stack user domain' and the user credential used to manage this domain. A 'stack user domain' can be treated as a namespace for projects, groups and users created by heat. The domain will have an admin user that manages other users, groups and projects in the domain. This script requires admin keystone credentials to be available in the shell environment by setting `OS_USERNAME` and `OS_PASSWORD`. After running this script, a user needs to take actions to check or modify the heat configuration file (e.g. /etc/heat/heat.conf). The tool is NOT performing these updates on behalf of the user. Distributions may provide other tools to setup 'stack user domain' for use with heat, so check the distro documentation first. Other tools are available to set up the 'stack user domain', for example `python-openstackclient`, which is preferred to this tool where it is available. OPTIONS ======= .. 
cmdoption:: -h, --help Print usage information. .. cmdoption:: --config-dir Path to a config directory from which to read the ``heat.conf`` file(s). This file set is sorted, so as to provide a predictable parse order if individual options are over-ridden. The set is parsed after the file(s) specified via previous --config-file, arguments hence over-ridden options in the directory take precedence. .. cmdoption:: --config-file Path to a config file to use. Multiple config files can be specified, with values in later files taking precedence. The default files used is `/etc/heat/heat.conf`. .. cmdoption:: --stack-domain-admin Name of a user for Keystone to create, which has roles sufficient to manage users (i.e. stack domain users) and projects (i.e. stack domain projects) in the 'stack user domain'. Another way to specify the admin user name is by setting an environment variable `STACK_DOMAIN_ADMIN` before running this tool. If both command line arguments and environment variable are specified, the command line arguments take precedence. .. cmdoption:: --stack-domain-admin-password Password for the 'stack-domain-admin' user. The password can be instead specified using an environment variable `STACK_DOMAIN_ADMIN_PASSWORD` before invoking this tool. If both command line arguments and environment variable are specified, the command line arguments take precedence. .. cmdoption:: --stack-user-domain-name Name of domain to create for stack users. The domain name can be instead specified using an environment variable `STACK_USER_DOMAIN_NAME` before invoking this tool. If both command line arguments and environment variable are specified, the command line argument take precedence. .. cmdoption:: --version Show program's version number and exit. The output could be empty if the distribution didn't specify any version information. 
EXAMPLES ======== heat-keystone-setup-domain heat-keystone-setup-domain --stack-user-domain-name heat_user_domain \ --stack-domain-admin heat_domain_admin \ --stack-domain-admin-password verysecrete BUGS ==== Heat bugs are managed through Launchpad `OpenStack Heat Bugs `__ heat-10.0.0/doc/source/api/0000775000175100017510000000000013245512113015350 5ustar zuulzuul00000000000000heat-10.0.0/doc/source/api/index.rst0000666000175100017510000000014713245511546017226 0ustar zuulzuul00000000000000=================== Source Code Index =================== .. toctree:: :maxdepth: 1 autoindex heat-10.0.0/doc/source/configuration/0000775000175100017510000000000013245512113017446 5ustar zuulzuul00000000000000heat-10.0.0/doc/source/configuration/clients.rst0000666000175100017510000000215213245511546021654 0ustar zuulzuul00000000000000================= Configure clients ================= The following options allow configuration of the clients that Orchestration uses to talk to other services. .. include:: ./tables/heat-clients.rst .. include:: ./tables/heat-clients_backends.rst .. include:: ./tables/heat-clients_aodh.rst .. include:: ./tables/heat-clients_barbican.rst .. include:: ./tables/heat-clients_ceilometer.rst .. include:: ./tables/heat-clients_cinder.rst .. include:: ./tables/heat-clients_designate.rst .. include:: ./tables/heat-clients_glance.rst .. include:: ./tables/heat-clients_heat.rst .. include:: ./tables/heat-clients_keystone.rst .. include:: ./tables/heat-clients_magnum.rst .. include:: ./tables/heat-clients_manila.rst .. include:: ./tables/heat-clients_mistral.rst .. include:: ./tables/heat-clients_monasca.rst .. include:: ./tables/heat-clients_neutron.rst .. include:: ./tables/heat-clients_nova.rst .. include:: ./tables/heat-clients_sahara.rst .. include:: ./tables/heat-clients_senlin.rst .. include:: ./tables/heat-clients_swift.rst .. include:: ./tables/heat-clients_trove.rst .. 
include:: ./tables/heat-clients_zaqar.rst heat-10.0.0/doc/source/configuration/tables/0000775000175100017510000000000013245512113020720 5ustar zuulzuul00000000000000heat-10.0.0/doc/source/configuration/tables/heat-redis.rst0000666000175100017510000000312713245511546023515 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-redis: .. list-table:: Description of Redis configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[matchmaker_redis]** - * - ``check_timeout`` = ``20000`` - (Integer) Time in ms to wait before the transaction is killed. * - ``host`` = ``127.0.0.1`` - (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url * - ``password`` = - (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url * - ``port`` = ``6379`` - (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url * - ``sentinel_group_name`` = ``oslo-messaging-zeromq`` - (String) Redis replica set name. * - ``sentinel_hosts`` = - (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g., [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url * - ``socket_timeout`` = ``10000`` - (Integer) Timeout in ms on blocking socket operations. * - ``wait_timeout`` = ``2000`` - (Integer) Time in ms to wait between connection attempts. heat-10.0.0/doc/source/configuration/tables/heat-loadbalancer.rst0000666000175100017510000000142313245511546025013 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. 
It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-loadbalancer: .. list-table:: Description of load balancer configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``loadbalancer_template`` = ``None`` - (String) Custom template for the built-in loadbalancer nested stack. heat-10.0.0/doc/source/configuration/tables/heat-metadata_api.rst0000666000175100017510000000155613245511546025024 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-metadata_api: .. list-table:: Description of metadata API configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``heat_metadata_server_url`` = ``None`` - (String) URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than in the keystone catalog heat-10.0.0/doc/source/configuration/tables/heat-testing.rst0000666000175100017510000000511013245511546024056 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. 
The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-testing: .. list-table:: Description of testing configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[profiler]** - * - ``connection_string`` = ``messaging://`` - (String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications. * - ``enabled`` = ``False`` - (Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: * True: Enables the feature * False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project's part will be empty. * - ``hmac_keys`` = ``SECRET_KEY`` - (String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both the "enabled" flag and the "hmac_keys" config option should be set to enable profiling. Also, to generate correct profiling information across all services, at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources. * - ``trace_sqlalchemy`` = ``False`` - (Boolean) Enables SQL requests profiling in services. 
Default value is False (SQL requests won't be traced). Possible values: * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it. * False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. heat-10.0.0/doc/source/configuration/tables/heat-clients_magnum.rst0000666000175100017510000000231713245511546025414 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_magnum: .. list-table:: Description of magnum clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_magnum]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-trustee.rst0000666000175100017510000000151313245511546024077 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-trustee: .. list-table:: Description of trustee configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[trustee]** - * - ``auth_section`` = ``None`` - (Unknown) Config Section from which to load plugin specific options * - ``auth_type`` = ``None`` - (Unknown) Authentication type to load heat-10.0.0/doc/source/configuration/tables/heat-clients_sahara.rst0000666000175100017510000000231713245511546025367 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_sahara: .. list-table:: Description of sahara clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_sahara]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-crypt.rst0000666000175100017510000000150513245511546023546 0ustar zuulzuul00000000000000.. 
Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-crypt: .. list-table:: Description of crypt configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``auth_encryption_key`` = ``notgood but just long enough i t`` - (String) Key used to encrypt authentication info in the database. Length of this key must be 32 characters. heat-10.0.0/doc/source/configuration/tables/heat-notification.rst0000666000175100017510000000132413245511546025072 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-notification: .. list-table:: Description of notification configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``onready`` = ``None`` - (String) Deprecated. heat-10.0.0/doc/source/configuration/tables/heat-clients_zaqar.rst0000666000175100017510000000231413245511546025243 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_zaqar: .. list-table:: Description of zaqar clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_zaqar]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_swift.rst0000666000175100017510000000231413245511546025261 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_swift: .. list-table:: Description of swift clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_swift]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. 
* - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_trove.rst0000666000175100017510000000231413245511546025264 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_trove: .. list-table:: Description of trove clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_trove]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_mistral.rst0000666000175100017510000000232213245511546025577 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_mistral: .. list-table:: Description of mistral clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_mistral]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-api.rst0000666000175100017510000001675713245511546023175 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-api: .. list-table:: Description of API configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``action_retry_limit`` = ``5`` - (Integer) Number of times to retry to bring a resource to a non-error state. Set to 0 to disable retries. * - ``enable_stack_abandon`` = ``False`` - (Boolean) Enable the preview Stack Abandon feature. * - ``enable_stack_adopt`` = ``False`` - (Boolean) Enable the preview Stack Adopt feature. 
* - ``encrypt_parameters_and_properties`` = ``False`` - (Boolean) Encrypt template parameters that were marked as hidden and also all the resource properties before storing them in the database. * - ``heat_metadata_server_url`` = ``None`` - (String) URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than the one in the keystone catalog. * - ``heat_stack_user_role`` = ``heat_stack_user`` - (String) Keystone role for heat template-defined users. * - ``heat_waitcondition_server_url`` = ``None`` - (String) URL of the Heat waitcondition server. * - ``heat_watch_server_url`` = - (String) URL of the Heat CloudWatch server. * - ``hidden_stack_tags`` = ``data-processing-cluster`` - (List) Stacks containing these tag names will be hidden. Multiple tags should be given in a comma-delimited list (e.g. hidden_stack_tags=hide_me,me_too). * - ``max_json_body_size`` = ``1048576`` - (Integer) Maximum raw byte size of JSON request body. Should be larger than max_template_size. * - ``num_engine_workers`` = ``None`` - (Integer) Number of heat-engine processes to fork and run. Will default to either 4 or the number of CPUs on the host, whichever is greater. * - ``observe_on_update`` = ``False`` - (Boolean) On update, enables heat to collect existing resource properties from reality and converge to the updated template. * - ``stack_action_timeout`` = ``3600`` - (Integer) Timeout in seconds for stack action (i.e. create or update). * - ``stack_domain_admin`` = ``None`` - (String) Keystone username, a user with roles sufficient to manage users and projects in the stack_user_domain. * - ``stack_domain_admin_password`` = ``None`` - (String) Keystone password for the stack_domain_admin user. 
* - ``stack_scheduler_hints`` = ``False`` - (Boolean) When this feature is enabled, scheduler hints identifying the heat stack context of a server or volume resource are passed to the configured schedulers in nova and cinder, for creates done using heat resource types OS::Cinder::Volume, OS::Nova::Server, and AWS::EC2::Instance. heat_root_stack_id will be set to the id of the root stack of the resource, heat_stack_id will be set to the id of the resource's parent stack, heat_stack_name will be set to the name of the resource's parent stack, heat_path_in_stack will be set to a list of comma delimited strings of stackresourcename and stackname with list[0] being 'rootstackname', heat_resource_name will be set to the resource's name, and heat_resource_uuid will be set to the resource's orchestration id. * - ``stack_user_domain_id`` = ``None`` - (String) Keystone domain ID which contains heat template-defined users. If this option is set, stack_user_domain_name option will be ignored. * - ``stack_user_domain_name`` = ``None`` - (String) Keystone domain name which contains heat template-defined users. If `stack_user_domain_id` option is set, this option is ignored. * - ``stale_token_duration`` = ``30`` - (Integer) Gap, in seconds, to determine whether the given token is about to expire. * - ``trusts_delegated_roles`` = - (List) Subset of trustor roles to be delegated to heat. If left unset, all roles of a user will be delegated to heat when creating a stack. * - **[auth_password]** - * - ``allowed_auth_uris`` = - (List) Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified. * - ``multi_cloud`` = ``False`` - (Boolean) Allow orchestration of multiple clouds. * - **[ec2authtoken]** - * - ``allowed_auth_uris`` = - (List) Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified. * - ``auth_uri`` = ``None`` - (String) Authentication Endpoint URI. 
* - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``insecure`` = ``False`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. * - ``multi_cloud`` = ``False`` - (Boolean) Allow orchestration of multiple clouds. * - **[eventlet_opts]** - * - ``client_socket_timeout`` = ``900`` - (Integer) Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of '0' means wait forever. * - ``wsgi_keep_alive`` = ``True`` - (Boolean) If False, closes the client socket connection explicitly. * - **[heat_api]** - * - ``backlog`` = ``4096`` - (Integer) Number of backlog requests to configure the socket with. * - ``bind_host`` = ``0.0.0.0`` - (IP) Address to bind the server. Useful when selecting a particular network interface. * - ``bind_port`` = ``8004`` - (Port number) The port on which the server will listen. * - ``cert_file`` = ``None`` - (String) Location of the SSL certificate file to use for SSL mode. * - ``key_file`` = ``None`` - (String) Location of the SSL key file to use for enabling SSL mode. * - ``max_header_line`` = ``16384`` - (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). * - ``tcp_keepidle`` = ``600`` - (Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. * - ``workers`` = ``0`` - (Integer) Number of workers for Heat service. Default value 0 means, that service will start number of workers equal number of cores on server. 
* - **[oslo_middleware]** - * - ``enable_proxy_headers_parsing`` = ``False`` - (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. * - ``max_request_body_size`` = ``114688`` - (Integer) The maximum body size for each request, in bytes. * - **[oslo_versionedobjects]** - * - ``fatal_exception_format_errors`` = ``False`` - (Boolean) Make exception message format errors fatal * - **[paste_deploy]** - * - ``api_paste_config`` = ``api-paste.ini`` - (String) The API paste config file to use. * - ``flavor`` = ``None`` - (String) The flavor to use. heat-10.0.0/doc/source/configuration/tables/heat-clients.rst0000666000175100017510000000251713245511546024052 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients: .. list-table:: Description of clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``region_name_for_services`` = ``None`` - (String) Default region name used to get services endpoints. * - **[clients]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``publicURL`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``False`` - (Boolean) If set, then the server's certificate will not be verified. 
* - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_aodh.rst0000666000175100017510000000231113245511546025035 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_aodh: .. list-table:: Description of aodh clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_aodh]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_glance.rst0000666000175100017510000000231713245511546025361 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_glance: .. 
list-table:: Description of glance clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_glance]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_designate.rst0000666000175100017510000000233013245511546026066 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_designate: .. list-table:: Description of designate clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_designate]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. 
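As a concrete illustration of the per-client SSL options listed in the tables above, a ``heat.conf`` fragment might look like the following. The certificate paths are placeholders chosen for the example, not defaults shipped with heat.

```ini
[clients]
# Defaults applied to all service clients.
endpoint_type = publicURL
ca_file = /etc/ssl/certs/openstack-ca.pem
insecure = False

[clients_glance]
# A per-client section overrides the [clients] defaults for that service.
endpoint_type = internalURL
cert_file = /etc/heat/ssl/glance-client.pem
key_file = /etc/heat/ssl/glance-client.key
```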
heat-10.0.0/doc/source/configuration/tables/heat-quota.rst0000666000175100017510000000305313245511546023536 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-quota: .. list-table:: Description of quota configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``max_events_per_stack`` = ``1000`` - (Integer) Rough number of maximum events that will be available per stack. Actual number of events can be a bit higher since purge checks take place randomly 200/event_purge_batch_size percent of the time. Older events are deleted when events are purged. Set to 0 for unlimited events per stack. * - ``max_nested_stack_depth`` = ``5`` - (Integer) Maximum depth allowed when using nested stacks. * - ``max_resources_per_stack`` = ``1000`` - (Integer) Maximum resources allowed per top-level stack. -1 stands for unlimited. * - ``max_server_name_length`` = ``53`` - (Integer) Maximum length of a server name to be used in nova. * - ``max_stacks_per_tenant`` = ``100`` - (Integer) Maximum number of stacks any one tenant may have active at one time. * - ``max_template_size`` = ``524288`` - (Integer) Maximum raw byte size of any template. heat-10.0.0/doc/source/configuration/tables/heat-clients_cinder.rst0000666000175100017510000000244713245511546025400 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. 
Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_cinder: .. list-table:: Description of cinder clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_cinder]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``http_log_debug`` = ``False`` - (Boolean) Allow client's debug log output. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_ceilometer.rst0000666000175100017510000000233313245511546026256 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_ceilometer: .. list-table:: Description of ceilometer clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_ceilometer]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. 
* - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_backends.rst0000666000175100017510000000145413245511546025703 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_backends: .. list-table:: Description of client backends configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``cloud_backend`` = ``heat.engine.clients.OpenStackClients`` - (String) Fully qualified class name to use as a client backend. heat-10.0.0/doc/source/configuration/tables/heat-clients_monasca.rst0000666000175100017510000000232213245511546025545 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_monasca: .. 
list-table:: Description of monasca clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_monasca]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_neutron.rst0000666000175100017510000000232213245511546025616 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_neutron: .. list-table:: Description of neutron clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_neutron]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. 
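The ``clients_neutron`` options above follow the same TLS pattern as the other
``clients_*`` sections. A minimal ``heat.conf`` sketch (the file paths and the
endpoint type are illustrative assumptions, not defaults):

.. code-block:: ini

   [clients_neutron]
   # Illustrative CA bundle used to verify the neutron server's certificate
   ca_file = /etc/ssl/certs/ca-bundle.pem
   # Illustrative client certificate chain and private key for mutual TLS
   cert_file = /etc/heat/ssl/client-cert.pem
   key_file = /etc/heat/ssl/client-key.pem
   # Pick the internal endpoint from the Identity service catalog
   endpoint_type = internalURL
   # Leave ``insecure`` unset so certificate verification stays enabled

Setting ``insecure = True`` in such a section disables certificate
verification entirely, so it is best reserved for test environments.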
heat-10.0.0/doc/source/configuration/tables/heat-clients_keystone.rst0000666000175100017510000000247013245511546025771 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_keystone: .. list-table:: Description of keystone clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_keystone]** - * - ``auth_uri`` = - (String) Unversioned keystone url in format like http://0.0.0.0:5000. * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_senlin.rst0000666000175100017510000000231713245511546025420 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_senlin: .. 
list-table:: Description of senlin clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_senlin]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_barbican.rst0000666000175100017510000000232513245511546025670 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_barbican: .. list-table:: Description of barbican clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_barbican]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. 
heat-10.0.0/doc/source/configuration/tables/heat-cfn_api.rst0000666000175100017510000000377313245511546024015 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-cfn_api: .. list-table:: Description of Cloudformation-compatible API configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``instance_connection_https_validate_certificates`` = ``1`` - (String) Instance connection to CFN/CW API validate certs if SSL is used. * - ``instance_connection_is_secure`` = ``0`` - (String) Instance connection to CFN/CW API via https. * - **[heat_api_cfn]** - * - ``backlog`` = ``4096`` - (Integer) Number of backlog requests to configure the socket with. * - ``bind_host`` = ``0.0.0.0`` - (IP) Address to bind the server. Useful when selecting a particular network interface. * - ``bind_port`` = ``8000`` - (Port number) The port on which the server will listen. * - ``cert_file`` = ``None`` - (String) Location of the SSL certificate file to use for SSL mode. * - ``key_file`` = ``None`` - (String) Location of the SSL key file to use for enabling SSL mode. * - ``max_header_line`` = ``16384`` - (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). * - ``tcp_keepidle`` = ``600`` - (Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. 
* - ``workers`` = ``1`` - (Integer) Number of workers for Heat service. heat-10.0.0/doc/source/configuration/tables/heat-clients_nova.rst0000666000175100017510000000244113245511546025071 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_nova: .. list-table:: Description of nova clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_nova]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``http_log_debug`` = ``False`` - (Boolean) Allow client's debug log output. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-clients_manila.rst0000666000175100017510000000231713245511546025371 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. 
_heat-clients_manila: .. list-table:: Description of manila clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_manila]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. heat-10.0.0/doc/source/configuration/tables/heat-common.rst0000666000175100017510000002551113245511546023700 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-common: .. list-table:: Description of common configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``client_retry_limit`` = ``2`` - (Integer) Number of times to retry when a client encounters an expected intermittent error. Set to 0 to disable retries. * - ``convergence_engine`` = ``True`` - (Boolean) Enables engine with convergence architecture. All stacks with this option will be created using convergence engine. * - ``default_deployment_signal_transport`` = ``CFN_SIGNAL`` - (String) Template default for how the server should signal to heat with the deployment output values. 
CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT (requires an object-store endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar queue to be signaled using the provided keystone credentials.
   * - ``default_software_config_transport`` = ``POLL_SERVER_CFN``
     - (String) Template default for how the server should receive the metadata required for software configuration. POLL_SERVER_CFN will allow calls to the cfn API action DescribeStackResource authenticated with the provided keypair (requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the provided keystone credentials (requires keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL will create and populate a Swift TempURL with metadata for polling (requires an object-store endpoint which supports TempURL). ZAQAR_MESSAGE will create a dedicated zaqar queue and post the metadata for polling.
   * - ``default_user_data_format`` = ``HEAT_CFNTOOLS``
     - (String) Template default for how the user_data should be formatted for the server. For HEAT_CFNTOOLS, the user_data is bundled as part of the heat-cfntools cloud-init boot configuration data. For RAW, the user_data is passed to Nova unmodified. For SOFTWARE_CONFIG, user_data is bundled as part of the software config data, and metadata is derived from any associated SoftwareDeployment resources.
   * - ``deferred_auth_method`` = ``trusts``
     - (String) Select deferred auth method, stored password or trusts.
   * - ``environment_dir`` = ``/etc/heat/environment.d``
     - (String) The directory to search for environment files.
   * - ``error_wait_time`` = ``240``
     - (Integer) The amount of time in seconds after an error has occurred that tasks may continue to run before being cancelled.
   * - ``event_purge_batch_size`` = ``200``
     - (Integer) Controls how many events will be pruned whenever a stack's events are purged. Set this lower to keep more events at the expense of more frequent purges.
   * - ``executor_thread_pool_size`` = ``64``
     - (Integer) Size of executor thread pool.
   * - ``host`` = ``localhost``
     - (String) Name of the engine node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address.
   * - ``keystone_backend`` = ``heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper``
     - (String) Fully qualified class name to use as a keystone backend.
   * - ``max_interface_check_attempts`` = ``10``
     - (Integer) Number of times to check whether an interface has been attached or detached.
   * - ``periodic_interval`` = ``60``
     - (Integer) Seconds between running periodic tasks.
   * - ``plugin_dirs`` = ``/usr/lib64/heat, /usr/lib/heat, /usr/local/lib/heat, /usr/local/lib64/heat``
     - (List) List of directories to search for plug-ins.
   * - ``reauthentication_auth_method`` =
     - (String) Allow reauthentication on token expiry, such that long-running tasks may complete. Note this defeats the expiry of any provided user tokens.
   * - ``template_dir`` = ``/etc/heat/templates``
     - (String) The directory to search for template files.
   * - **[constraint_validation_cache]**
     -
   * - ``caching`` = ``True``
     - (Boolean) Toggle to enable/disable caching when the Orchestration Engine validates property constraints of a stack. During property validation with constraints, the Orchestration Engine caches requests to other OpenStack services. Please note that the global toggle for oslo.cache (``enabled=True`` in the ``[cache]`` group) must be enabled to use this feature.
   * - ``expiration_time`` = ``60``
     - (Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of validation constraints.
   * - **[healthcheck]**
     -
   * - ``backends`` =
     - (List) Additional backends that can perform health checks and report that information back as part of a request.
   * - ``detailed`` = ``False``
     - (Boolean) Show more detailed information as part of the response.
   * - ``disable_by_file_path`` = ``None``
     - (String) Check the presence of a file to determine if an application is running on a port. Used by the DisableByFileHealthcheck plugin.
   * - ``disable_by_file_paths`` =
     - (List) Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by the DisableByFilesPortsHealthcheck plugin.
   * - ``path`` = ``/healthcheck``
     - (String) DEPRECATED: The path to respond to healthcheck requests on.
   * - **[heat_all]**
     -
   * - ``enabled_services`` = ``engine, api, api_cfn``
     - (List) Specifies the heat services that are enabled when running heat-all. Valid options are all or any combination of api, engine or api_cfn.
   * - **[profiler]**
     -
   * - ``connection_string`` = ``messaging://``
     - (String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging:// : use oslo_messaging driver for sending notifications. * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications. * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending notifications.
   * - ``enabled`` = ``False``
     - (Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: * True: Enables the feature * False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty.
   * - ``es_doc_type`` = ``notification``
     - (String) Document type for notification indexing in elasticsearch.
   * - ``es_scroll_size`` = ``10000``
     - (Integer) Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000).
   * - ``es_scroll_time`` = ``2m``
     - (String) This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it.
   * - ``hmac_keys`` = ``SECRET_KEY``
     - (String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: ``<key1>[,<key2>,...<keyn>]``, where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both the "enabled" flag and the "hmac_keys" config option should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources.
   * - ``sentinel_service_name`` = ``mymaster``
     - (String) Redis Sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster).
   * - ``socket_timeout`` = ``0.1``
     - (Floating point) Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1).
   * - ``trace_sqlalchemy`` = ``False``
     - (Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. * False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
* - **[resource_finder_cache]** - * - ``caching`` = ``True`` - (Boolean) Toggle to enable/disable caching when Orchestration Engine looks for other OpenStack service resources using name or id. Please note that the global toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use this feature. * - ``expiration_time`` = ``3600`` - (Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of OpenStack service finder functions. * - **[revision]** - * - ``heat_revision`` = ``unknown`` - (String) Heat build revision. If you would prefer to manage your build revision separately, you can move this section to a different file and add it as another config option. * - **[service_extension_cache]** - * - ``caching`` = ``True`` - (Boolean) Toggle to enable/disable caching when Orchestration Engine retrieves extensions from other OpenStack services. Please note that the global toggle for oslo.cache(enabled=True in [cache] group) must be enabled to use this feature. * - ``expiration_time`` = ``3600`` - (Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of service extensions. * - **[volumes]** - * - ``backups_enabled`` = ``True`` - (Boolean) Indicate if cinder-backup service is enabled. This is a temporary workaround until cinder-backup service becomes discoverable, see LP#1334856. * - **[yaql]** - * - ``limit_iterators`` = ``200`` - (Integer) The maximum number of elements in collection expression can take for its evaluation. * - ``memory_quota`` = ``10000`` - (Integer) The maximum size of memory in bytes that expression can take for its evaluation. heat-10.0.0/doc/source/configuration/tables/heat-clients_heat.rst0000666000175100017510000000246113245511546025051 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. 
The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-clients_heat: .. list-table:: Description of heat clients configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[clients_heat]** - * - ``ca_file`` = ``None`` - (String) Optional CA cert file to use in SSL connections. * - ``cert_file`` = ``None`` - (String) Optional PEM-formatted certificate chain file. * - ``endpoint_type`` = ``None`` - (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. * - ``insecure`` = ``None`` - (Boolean) If set, then the server's certificate will not be verified. * - ``key_file`` = ``None`` - (String) Optional PEM-formatted file that contains the private key. * - ``url`` = - (String) Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s. heat-10.0.0/doc/source/configuration/tables/heat-waitcondition_api.rst0000666000175100017510000000141613245511546026112 0ustar zuulzuul00000000000000.. Warning: Do not edit this file. It is automatically generated from the software project's code and your changes will be overwritten. The tool to generate this file lives in openstack-doc-tools repository. Please make any changes needed in the code, then run the autogenerate-config-doc tool from the openstack-doc-tools repository, or ask for help on the documentation mailing list, IRC channel or meeting. .. _heat-waitcondition_api: .. list-table:: Description of waitcondition API configuration options :header-rows: 1 :class: config-ref-table * - Configuration option = Default value - Description * - **[DEFAULT]** - * - ``heat_waitcondition_server_url`` = ``None`` - (String) URL of the Heat waitcondition server. 
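As a sketch of how the option above is typically set, pointing instances at
the heat-api-cfn waitcondition endpoint (the host address here is an
assumption for illustration, not a default):

.. code-block:: ini

   [DEFAULT]
   # Illustrative URL that instances use to signal wait conditions;
   # it normally resolves to the heat-api-cfn service (default port 8000).
   heat_waitcondition_server_url = http://192.0.2.10:8000/v1/waitcondition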
heat-10.0.0/doc/source/configuration/sample_policy.rst

==================
Heat Sample Policy
==================

The following is a sample heat policy file that has been auto-generated from
default policy values in code. If you're using the default policies, then the
maintenance of this file is not necessary, and it should not be copied into a
deployment. Doing so will result in duplicate policy definitions. It is here
to help explain which policy operations protect specific heat APIs, but it is
not suggested to copy and paste it into a deployment unless you're planning on
providing a different policy for an operation that is not the default. If you
wish to build a policy file, you can also use ``tox -e genpolicy`` to generate
it. The sample policy file can also be downloaded in `file form
<../_static/heat.policy.yaml.sample>`_.

.. literalinclude:: ../_static/heat.policy.yaml.sample

heat-10.0.0/doc/source/configuration/api.rst

===============================
Orchestration API configuration
===============================

Configuration options
~~~~~~~~~~~~~~~~~~~~~

The following options allow configuration of the APIs that Orchestration
supports. Currently this includes compatibility APIs for CloudFormation and a
native API.

.. include:: ./tables/heat-api.rst
.. include:: ./tables/heat-cfn_api.rst
.. include:: ./tables/heat-metadata_api.rst
.. include:: ./tables/heat-waitcondition_api.rst

heat-10.0.0/doc/source/configuration/index.rst

================
Configuring Heat
================

.. toctree::
   :maxdepth: 2

   api.rst
   clients.rst
   config-options.rst
   logs.rst
   sample_policy.rst

heat-10.0.0/doc/source/configuration/logs.rst

=======================
Orchestration log files
=======================

The corresponding log file of each Orchestration service is stored in the
``/var/log/heat/`` directory of the host on which each service runs.

.. list-table:: Log files used by Orchestration services
   :widths: 35 35
   :header-rows: 1

   * - Log filename
     - Service that logs to the file
   * - ``heat-api.log``
     - Orchestration service API Service
   * - ``heat-engine.log``
     - Orchestration service Engine Service
   * - ``heat-manage.log``
     - Orchestration service events

heat-10.0.0/doc/source/configuration/config-options.rst

==========================================================
Additional configuration options for Orchestration service
==========================================================

These options can also be set in the ``heat.conf`` file.

.. include:: ./tables/heat-common.rst
.. include:: ./tables/heat-crypt.rst
.. include:: ./tables/heat-loadbalancer.rst
.. include:: ./tables/heat-quota.rst
.. include:: ./tables/heat-redis.rst
.. include:: ./tables/heat-testing.rst
.. include:: ./tables/heat-trustee.rst

heat-10.0.0/doc/source/template_guide/hot_spec.rst

.. highlight:: yaml
   :linenothreshold: 5

.. Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License.
   You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.

.. _hot_spec:

===============================================
Heat Orchestration Template (HOT) specification
===============================================

HOT is a new template format meant to replace the Heat CloudFormation-compatible
format (CFN) as the native format supported by Heat over time. This
specification explains in detail all elements of the HOT template format. An
example-driven guide to writing HOT templates can be found at :ref:`hot_guide`.

Status
~~~~~~

HOT is considered reliable, supported, and standardized as of our Icehouse
(April 2014) release. The Heat core team may make improvements to the standard,
which very likely would be backward compatible. The template format is also
versioned. Since the Juno release, Heat supports multiple different versions of
the HOT specification.

Template structure
~~~~~~~~~~~~~~~~~~

HOT templates are defined in YAML and follow the structure outlined below.

.. code-block:: yaml

   heat_template_version: 2016-10-14

   description:
     # a description of the template

   parameter_groups:
     # a declaration of input parameter groups and order

   parameters:
     # declaration of input parameters

   resources:
     # declaration of template resources

   outputs:
     # declaration of output parameters

   conditions:
     # declaration of conditions

heat_template_version
    This key with value ``2013-05-23`` (or a later date) indicates that the
    YAML document is a HOT template of the specified version.

description
    This optional key allows for giving a description of the template, or of
    the workload that can be deployed using the template.
parameter_groups
    This section allows for specifying how the input parameters should be
    grouped and the order to provide the parameters in. This section is
    optional and can be omitted when not needed.

parameters
    This section allows for specifying input parameters that have to be
    provided when instantiating the template. The section is optional and can
    be omitted when no input is required.

resources
    This section contains the declaration of the individual resources of the
    template. This section, with at least one resource, should be defined in
    any HOT template; otherwise the template would not really do anything when
    being instantiated.

outputs
    This section allows for specifying output parameters available to users
    once the template has been instantiated. This section is optional and can
    be omitted when no output values are required.

conditions
    This optional section includes statements which can be used to restrict
    when a resource is created or when a property is defined. They can be
    associated with resources and resource properties in the ``resources``
    section, and can also be associated with outputs in the ``outputs``
    section of a template.

    Note: Support for this section is added in the Newton version.

.. _hot_spec_template_version:

Heat template version
~~~~~~~~~~~~~~~~~~~~~

The value of ``heat_template_version`` tells Heat not only the format of the
template but also the features that will be validated and supported. Beginning
with the Newton release, the version can be either the date of the Heat release
or the code name of the Heat release. Heat currently supports the following
values for the ``heat_template_version`` key:

2013-05-23
----------

The key with value ``2013-05-23`` indicates that the YAML document is a HOT
template and it may contain features implemented until the Icehouse release.
This version supports the following functions (some are back ported to this
version)::

  get_attr
  get_file
  get_param
  get_resource
  list_join
  resource_facade
  str_replace
  Fn::Base64
  Fn::GetAZs
  Fn::Join
  Fn::MemberListToMap
  Fn::Replace
  Fn::ResourceFacade
  Fn::Select
  Fn::Split
  Ref

2014-10-16
----------

The key with value ``2014-10-16`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the Juno
release.

This version removes most CFN functions that were supported in the Icehouse
release, i.e. the ``2013-05-23`` version. So the supported functions now
are::

  get_attr
  get_file
  get_param
  get_resource
  list_join
  resource_facade
  str_replace
  Fn::Select

2015-04-30
----------

The key with value ``2015-04-30`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the Kilo
release.

This version adds the ``repeat`` function. So the complete list of supported
functions is::

  get_attr
  get_file
  get_param
  get_resource
  list_join
  repeat
  digest
  resource_facade
  str_replace
  Fn::Select

2015-10-15
----------

The key with value ``2015-10-15`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the
Liberty release.

This version removes the *Fn::Select* function; path based
``get_attr``/``get_param`` references should be used instead. Moreover, since
this version ``get_attr`` returns a dict of all attributes for the given
resource (excluding the *show* attribute) if no attribute is specified, e.g.
:code:`{ get_attr: [<resource name>]}`. This version also adds the
``str_split`` function and support for passing multiple lists to the existing
``list_join`` function.
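The Liberty-era additions described above can be sketched as follows (a
hypothetical snippet; the parameter name ``csv_zones`` and its value are
made up for illustration):

.. code-block:: yaml

   heat_template_version: 2015-10-15

   parameters:
     csv_zones:
       type: string
       default: "az1,az2"

   outputs:
     zones_list:
       # str_split turns the string "az1,az2" into the list ["az1", "az2"]
       value: { str_split: [',', { get_param: csv_zones }] }
     joined:
       # list_join accepts more than one list to join since this version
       value: { list_join: [', ', ['one', 'two'], ['three']] }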
The complete list of supported functions is::

  get_attr
  get_file
  get_param
  get_resource
  list_join
  repeat
  digest
  resource_facade
  str_replace
  str_split

2016-04-08
----------

The key with value ``2016-04-08`` indicates that the YAML document is a HOT
template and it may contain features added and/or removed up until the Mitaka
release.

This version adds the ``map_merge`` function which can be used to merge the
contents of maps.

The complete list of supported functions is::

  digest
  get_attr
  get_file
  get_param
  get_resource
  list_join
  map_merge
  repeat
  resource_facade
  str_replace
  str_split

2016-10-14 | newton
-------------------

The key with value ``2016-10-14`` or ``newton`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Newton release.

This version adds the ``yaql`` function which can be used for evaluation of
complex expressions, the ``map_replace`` function that can do key/value
replacements on a mapping, and the ``if`` function which can be used to
return a corresponding value based on condition evaluation.

The complete list of supported functions is::

  digest
  get_attr
  get_file
  get_param
  get_resource
  list_join
  map_merge
  map_replace
  repeat
  resource_facade
  str_replace
  str_split
  yaql
  if

This version adds the ``equals`` condition function which can be used to
compare whether two values are equal, the ``not`` condition function which
acts as a NOT operator, the ``and`` condition function which acts as an AND
operator to evaluate all the specified conditions, and the ``or`` condition
function which acts as an OR operator to evaluate all the specified
conditions.

The complete list of supported condition functions is::

  equals
  get_param
  not
  and
  or

2017-02-24 | ocata
------------------

The key with value ``2017-02-24`` or ``ocata`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Ocata release.
This version adds the ``str_replace_strict`` function which raises errors for
missing params and the ``filter`` function which filters out values from
lists.

The complete list of supported functions is::

  digest
  filter
  get_attr
  get_file
  get_param
  get_resource
  list_join
  map_merge
  map_replace
  repeat
  resource_facade
  str_replace
  str_replace_strict
  str_split
  yaql
  if

The complete list of supported condition functions is::

  equals
  get_param
  not
  and
  or

2017-09-01 | pike
-----------------

The key with value ``2017-09-01`` or ``pike`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Pike release.

This version adds the ``make_url`` function for assembling URLs, the
``list_concat`` function for combining multiple lists, the
``list_concat_unique`` function for combining multiple lists without
repeating items, the ``str_replace_vstrict`` function which raises errors
for missing and empty params, and the ``contains`` function which checks
whether a specific value is in a sequence.

The complete list of supported functions is::

  digest
  filter
  get_attr
  get_file
  get_param
  get_resource
  list_join
  make_url
  list_concat
  list_concat_unique
  contains
  map_merge
  map_replace
  repeat
  resource_facade
  str_replace
  str_replace_strict
  str_replace_vstrict
  str_split
  yaql
  if

This version adds ``yaql`` and ``contains`` as condition functions.

The complete list of supported condition functions is::

  equals
  get_param
  not
  and
  or
  yaql
  contains

2018-03-02 | queens
-------------------

The key with value ``2018-03-02`` or ``queens`` indicates that the YAML
document is a HOT template and it may contain features added and/or removed
up until the Queens release.
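The Pike-era URL and list functions described above can be sketched as
follows (a hypothetical snippet; the host and path values are made up for
illustration):

.. code-block:: yaml

   heat_template_version: 2017-09-01

   outputs:
     api_url:
       # make_url assembles a URL from its named components
       value:
         make_url:
           scheme: https
           host: example.com
           port: 8004
           path: /v1/stacks
     merged:
       # list_concat_unique concatenates lists without repeating items
       value: { list_concat_unique: [['a', 'b'], ['b', 'c']] }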
The complete list of supported functions is::

  digest
  filter
  get_attr
  get_file
  get_param
  get_resource
  list_join
  make_url
  list_concat
  list_concat_unique
  contains
  map_merge
  map_replace
  repeat
  resource_facade
  str_replace
  str_replace_strict
  str_replace_vstrict
  str_split
  yaql
  if

The complete list of supported condition functions is::

  equals
  get_param
  not
  and
  or
  yaql
  contains

.. _hot_spec_parameter_groups:

Parameter groups section
~~~~~~~~~~~~~~~~~~~~~~~~

The ``parameter_groups`` section allows for specifying how the input
parameters should be grouped and the order to provide the parameters in.
These groups are typically used to describe expected behavior for downstream
user interfaces.

These groups are specified in a list with each group containing a list of
associated parameters. The lists are used to denote the expected order of
the parameters. Each parameter should be associated with a specific group
only once, using the parameter name to bind it to a defined parameter in the
``parameters`` section.

.. code-block:: yaml

   parameter_groups:
   - label: <human-readable label of parameter group>
     description: <description of the parameter group>
     parameters:
     - <param name>
     - <param name>

label
    A human-readable label that defines the associated group of parameters.

description
    This attribute allows for giving a human-readable description of the
    parameter group.

parameters
    A list of parameters associated with this parameter group.

param name
    The name of the parameter that is defined in the associated
    ``parameters`` section.

.. _hot_spec_parameters:

Parameters section
~~~~~~~~~~~~~~~~~~

The ``parameters`` section allows for specifying input parameters that have
to be provided when instantiating the template. Such parameters are
typically used to customize each deployment (e.g. by setting custom user
names or passwords) or for binding to environment-specifics like certain
images.

Each parameter is specified in a separate nested block with the name of the
parameter defined in the first line and additional attributes such as type
or default value defined as nested elements.

.. code-block:: yaml

   parameters:
     <param name>:
       type: <string | number | json | comma_delimited_list | boolean>
       label: <human-readable name of the parameter>
       description: <description of the parameter>
       default: <default value for parameter>
       hidden: <true | false>
       constraints:
         <parameter constraints>
       immutable: <true | false>
       tags: <list of parameter categories>

param name
    The name of the parameter.

type
    The type of the parameter. Supported types are ``string``, ``number``,
    ``comma_delimited_list``, ``json`` and ``boolean``. This attribute is
    required.

label
    A human readable name for the parameter. This attribute is optional.

description
    A human readable description for the parameter. This attribute is
    optional.

default
    A default value for the parameter. This value is used if the user
    doesn't specify their own value during deployment. This attribute is
    optional.

hidden
    Defines whether the parameter should be hidden when a user requests
    information about a stack created from the template. This attribute can
    be used to hide passwords specified as parameters. This attribute is
    optional and defaults to ``false``.

constraints
    A list of constraints to apply. The constraints are validated by the
    Orchestration engine when a user deploys a stack. The stack creation
    fails if the parameter value doesn't comply with the constraints. This
    attribute is optional.

immutable
    Defines whether the parameter is updatable. Stack update fails if this
    is set to ``true`` and the parameter value is changed. This attribute is
    optional and defaults to ``false``.

tags
    A list of strings to specify the category of a parameter. This value is
    used to categorize a parameter so that users can group the parameters.
    This attribute is optional.

The table below describes all currently supported types with examples:

+----------------------+-------------------------------+------------------+
| Type                 | Description                   | Examples         |
+======================+===============================+==================+
| string               | A literal string.             | "String param"   |
+----------------------+-------------------------------+------------------+
| number               | An integer or float.          | "2"; "0.2"       |
+----------------------+-------------------------------+------------------+
| comma_delimited_list | An array of literal strings   | ["one", "two"];  |
|                      | that are separated by commas. | "one, two";      |
|                      | The total number of strings   | Note: "one, two" |
|                      | should be one more than the   | returns          |
|                      | total number of commas.       | ["one", " two"]  |
+----------------------+-------------------------------+------------------+
| json                 | A JSON-formatted map or list. | {"key": "value"} |
+----------------------+-------------------------------+------------------+
| boolean              | Boolean type value, which can | "on"; "n"        |
|                      | be equal to "t", "true",      |                  |
|                      | "on", "y", "yes", or "1" for  |                  |
|                      | a true value and "f",         |                  |
|                      | "false", "off", "n", "no",    |                  |
|                      | or "0" for a false value.     |                  |
+----------------------+-------------------------------+------------------+

The following example shows a minimalistic definition of two parameters

.. code-block:: yaml

   parameters:
     user_name:
       type: string
       label: User Name
       description: User name to be configured for the application
     port_number:
       type: number
       label: Port Number
       description: Port number to be configured for the web server

.. note::
   The description and the label are optional, but defining these attributes
   is good practice to provide useful information about the role of the
   parameter to the user.

.. _hot_spec_parameters_constraints:

Parameter Constraints
---------------------

The ``constraints`` block of a parameter definition defines additional
validation constraints that apply to the value of the parameter. The
parameter values provided by a user are validated against the constraints at
instantiation time. The constraints are defined as a list with the following
syntax

.. code-block:: yaml

   constraints:
     - <constraint type>: <constraint definition>
       description: <constraint description>

constraint type
    Type of constraint to apply. The set of currently supported constraints
    is given below.

constraint definition
    The actual constraint, depending on the constraint type.
The concrete syntax for each constraint type is given below.

description
    A description of the constraint. The text is presented to the user when
    the value they define violates the constraint. If omitted, a default
    validation message is presented to the user. This attribute is optional.

The following example shows the definition of a string parameter with two
constraints.

.. code-block:: yaml

   parameters:
     user_name:
       type: string
       label: User Name
       description: User name to be configured for the application
       constraints:
         - length: { min: 6, max: 8 }
           description: User name must be between 6 and 8 characters
         - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
           description: User name must start with an uppercase character

.. note::
   While the descriptions for each constraint are optional, it is good
   practice to provide concrete descriptions so useful messages can be
   presented to the user at deployment time.

The following sections list the supported types of parameter constraints,
along with the concrete syntax for each type.

length
++++++

The ``length`` constraint applies to parameters of type ``string``,
``comma_delimited_list`` and ``json``. It defines a lower and upper limit
for the length of the string value or list/map collection.

The syntax of the ``length`` constraint is

.. code-block:: yaml

   length: { min: <lower limit>, max: <upper limit> }

It is possible to define a length constraint with only a lower limit or an
upper limit. However, at least one of ``min`` or ``max`` must be specified.

range
+++++

The ``range`` constraint applies to parameters of type ``number``. It
defines a lower and upper limit for the numeric value of the parameter.

The syntax of the ``range`` constraint is

.. code-block:: yaml

   range: { min: <lower limit>, max: <upper limit> }

It is possible to define a range constraint with only a lower limit or an
upper limit.
However, at least one of ``min`` or ``max`` must be specified. The minimum
and maximum boundaries are included in the range.

For example, the following range constraint would allow for all numeric
values between 0 and 10

.. code-block:: yaml

   range: { min: 0, max: 10 }

modulo
++++++

The ``modulo`` constraint applies to parameters of type ``number``. The
value is valid if it is a multiple of ``step``, starting with ``offset``.

The syntax of the ``modulo`` constraint is

.. code-block:: yaml

   modulo: { step: <step>, offset: <offset> }

Both ``step`` and ``offset`` must be specified.

For example, the following modulo constraint would only allow for odd
numbers

.. code-block:: yaml

   modulo: { step: 2, offset: 1 }

allowed_values
++++++++++++++

The ``allowed_values`` constraint applies to parameters of type ``string``
or ``number``. It specifies a set of possible values for a parameter. At
deployment time, the user-provided value for the respective parameter must
match one of the elements of the list.

The syntax of the ``allowed_values`` constraint is

.. code-block:: yaml

   allowed_values: [ <value>, <value>, ... ]

Alternatively, the following YAML list notation can be used

.. code-block:: yaml

   allowed_values:
     - <value>
     - <value>
     - ...

For example

.. code-block:: yaml

   parameters:
     instance_type:
       type: string
       label: Instance Type
       description: Instance type for compute instances
       constraints:
         - allowed_values:
           - m1.small
           - m1.medium
           - m1.large

allowed_pattern
+++++++++++++++

The ``allowed_pattern`` constraint applies to parameters of type ``string``.
It specifies a regular expression against which a user-provided parameter
value must evaluate at deployment.

The syntax of the ``allowed_pattern`` constraint is

.. code-block:: yaml

   allowed_pattern: <regular expression>

For example

.. code-block:: yaml

   parameters:
     user_name:
       type: string
       label: User Name
       description: User name to be configured for the application
       constraints:
         - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
           description: User name must start with an uppercase character

custom_constraint
+++++++++++++++++

The ``custom_constraint`` constraint adds an extra step of validation,
generally to check that the specified resource exists in the backend. Custom
constraints get implemented by plug-ins and can provide any kind of advanced
constraint validation logic.

The syntax of the ``custom_constraint`` constraint is

.. code-block:: yaml

   custom_constraint: <name>

The ``name`` attribute specifies the concrete type of custom constraint. It
corresponds to the name under which the respective validation plugin has
been registered in the Orchestration engine.

For example

.. code-block:: yaml

   parameters:
     key_name:
       type: string
       description: SSH key pair
       constraints:
         - custom_constraint: nova.keypair

The following section lists the custom constraints and the plug-ins that
support them.

.. table_from_text:: ../../setup.cfg
   :header: Name,Plug-in
   :regex: (.*)=(.*)
   :start-after: heat.constraints =
   :end-before: heat.stack_lifecycle_plugins =
   :sort:

.. _hot_spec_pseudo_parameters:

Pseudo parameters
-----------------

In addition to parameters defined by a template author, Heat also creates
three parameters for every stack that allow referential access to the
stack's name, stack's identifier and project's identifier. These parameters
are named ``OS::stack_name`` for the stack name, ``OS::stack_id`` for the
stack identifier and ``OS::project_id`` for the project identifier. These
values are accessible via the `get_param`_ intrinsic function, just like
user-defined parameters.

.. note::
   ``OS::project_id`` is available since 2015.1 (Kilo).

.. _hot_spec_resources:

Resources section
~~~~~~~~~~~~~~~~~

The ``resources`` section defines actual resources that make up a stack
deployed from the HOT template (for instance compute instances, networks,
storage volumes).

Each resource is defined as a separate block in the ``resources`` section
with the following syntax

.. code-block:: yaml

   resources:
     <resource ID>:
       type: <resource type>
       properties:
         <property name>: <property value>
       metadata:
         <resource specific metadata>
       depends_on: <resource ID or list of ID>
       update_policy: <update policy>
       deletion_policy: <deletion policy>
       external_id: <external resource ID>
       condition: <condition name>

resource ID
    A resource ID which must be unique within the ``resources`` section of
    the template.

type
    The resource type, such as ``OS::Nova::Server`` or ``OS::Neutron::Port``.
    This attribute is required.

properties
    A list of resource-specific properties. The property value can be
    provided in place, or via a function (see
    :ref:`hot_spec_intrinsic_functions`). This section is optional.

metadata
    Resource-specific metadata. This section is optional.

depends_on
    Dependencies of the resource on one or more resources of the template.
    See :ref:`hot_spec_resources_dependencies` for details. This attribute
    is optional.

update_policy
    Update policy for the resource, in the form of a nested dictionary.
    Whether update policies are supported and what the exact semantics are
    depends on the type of the current resource. This attribute is optional.

deletion_policy
    Deletion policy for the resource. The allowed deletion policies are
    ``Delete``, ``Retain``, and ``Snapshot``. Beginning with
    ``heat_template_version`` ``2016-10-14``, the lowercase equivalents
    ``delete``, ``retain``, and ``snapshot`` are also allowed. This
    attribute is optional; the default policy is to delete the physical
    resource when deleting a resource from the stack.

external_id
    Allows for specifying the resource_id for an existing external (to the
    stack) resource. External resources cannot depend on other resources,
    but other resources may depend on an external resource. This attribute
    is optional.
    Note: when this is specified, properties will not be used for building
    the resource and the resource is not managed by Heat. It is not possible
    to update this attribute. Also, the resource won't be deleted by Heat
    when the stack is deleted.

condition
    Condition for the resource, which decides whether to create the
    resource. This attribute is optional.

    Note: Support for ``condition`` for a resource is added in the Newton
    version.

Depending on the type of resource, the resource block might include more
resource specific data.

All resource types that can be used in CFN templates can also be used in
HOT templates, adapted to the YAML structure as outlined above.

The following example demonstrates the definition of a simple compute
resource with some fixed property values

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: m1.small
         image: F18-x86_64-cfntools

.. _hot_spec_resources_dependencies:

Resource dependencies
---------------------

The ``depends_on`` attribute of a resource defines a dependency between this
resource and one or more other resources.

If a resource depends on just one other resource, the ID of the other
resource is specified as string of the ``depends_on`` attribute, as shown in
the following example

.. code-block:: yaml

   resources:
     server1:
       type: OS::Nova::Server
       depends_on: server2

     server2:
       type: OS::Nova::Server

If a resource depends on more than one other resource, the value of the
``depends_on`` attribute is specified as a list of resource IDs, as shown in
the following example

.. code-block:: yaml

   resources:
     server1:
       type: OS::Nova::Server
       depends_on: [ server2, server3 ]

     server2:
       type: OS::Nova::Server

     server3:
       type: OS::Nova::Server

.. _hot_spec_outputs:

Outputs section
~~~~~~~~~~~~~~~

The ``outputs`` section defines output parameters that should be available
to the user after a stack has been created.
This would be, for example, parameters such as IP addresses of deployed
instances, or URLs of web applications deployed as part of a stack.

Each output parameter is defined as a separate block within the outputs
section according to the following syntax

.. code-block:: yaml

   outputs:
     <parameter name>:
       description: <description>
       value: <parameter value>
       condition: <condition name>

parameter name
    The output parameter name, which must be unique within the ``outputs``
    section of a template.

description
    A short description of the output parameter. This attribute is optional.

parameter value
    The value of the output parameter. This value is usually resolved by
    means of a function. See :ref:`hot_spec_intrinsic_functions` for details
    about the functions. This attribute is required.

condition
    To conditionally define an output value. A value of None will be shown
    if the condition is False. This attribute is optional.

    Note: Support for ``condition`` for an output is added in the Newton
    version.

The example below shows how the IP address of a compute resource can be
defined as an output parameter

.. code-block:: yaml

   outputs:
     instance_ip:
       description: IP address of the deployed compute instance
       value: { get_attr: [my_instance, first_address] }

Conditions section
~~~~~~~~~~~~~~~~~~

The ``conditions`` section defines one or more conditions which are
evaluated based on input parameter values provided when a user creates or
updates a stack. Based on the result of a condition, a user can
conditionally create resources, set different values of properties, and
give outputs of a stack. The condition can be associated with resources,
resource properties and outputs.

The ``conditions`` section is defined with the following syntax

.. code-block:: yaml

   conditions:
     <condition name1>: {expression1}
     <condition name2>: {expression2}
     ...

condition name
    The condition name, which must be unique within the ``conditions``
    section of a template.

expression
    The expression which is expected to return True or False.
Usually, the condition functions can be used as expression to define
conditions::

  equals
  get_param
  not
  and
  or
  yaql

Note: In condition functions, you can reference a value from an input
parameter, but you cannot reference a resource or its attributes. We support
referencing other conditions (by condition name) in condition functions. The
``yaql`` condition function is supported since the Pike version.

An example of conditions section definition

.. code-block:: yaml

   conditions:
     cd1: True
     cd2:
       get_param: param1
     cd3:
       equals:
       - get_param: param2
       - yes
     cd4:
       not:
         equals:
         - get_param: param3
         - yes
     cd5:
       and:
       - equals:
         - get_param: env_type
         - prod
       - not:
           equals:
           - get_param: zone
           - beijing
     cd6:
       or:
       - equals:
         - get_param: zone
         - shanghai
       - equals:
         - get_param: zone
         - beijing
     cd7:
       not: cd4
     cd8:
       and:
       - cd1
       - cd2
     cd9:
       yaql:
         expression: $.data.services.contains('heat')
         data:
           services:
             get_param: ServiceNames
     cd10:
       contains:
       - 'neutron'
       - get_param: ServiceNames

The example below shows how to associate condition with resources

.. code-block:: yaml

   parameters:
     env_type:
       default: test
       type: string
   conditions:
     create_prod_res: {equals : [{get_param: env_type}, "prod"]}
   resources:
     volume:
       type: OS::Cinder::Volume
       condition: create_prod_res
       properties:
         size: 1

The 'create_prod_res' condition evaluates to true if the 'env_type'
parameter is equal to 'prod'. In the above sample template, the 'volume'
resource is associated with the 'create_prod_res' condition. Therefore, the
'volume' resource is created only if the 'env_type' is equal to 'prod'.

The example below shows how to conditionally define an output

.. code-block:: yaml

   outputs:
     vol_size:
       value: {get_attr: [my_volume, size]}
       condition: create_prod_res

In the above sample template, the 'vol_size' output is associated with the
'create_prod_res' condition. Therefore, the 'vol_size' output is given the
corresponding value only if the 'env_type' is equal to 'prod'; otherwise the
value of the output is None.

.. _hot_spec_intrinsic_functions:

Intrinsic functions
~~~~~~~~~~~~~~~~~~~

HOT provides a set of intrinsic functions that can be used inside templates
to perform specific tasks, such as getting the value of a resource attribute
at runtime. The following section describes the role and syntax of the
intrinsic functions.

Note: these functions can only be used within the ``properties`` section of
each resource or in the ``outputs`` section.

get_attr
--------

The ``get_attr`` function references an attribute of a resource. The
attribute value is resolved at runtime using the resource instance created
from the respective resource definition.

Path based attribute referencing using keys or indexes requires
``heat_template_version`` ``2014-10-16`` or higher.

The syntax of the ``get_attr`` function is

.. code-block:: yaml

   get_attr:
     - <resource name>
     - <attribute name>
     - <key/index 1> (optional)
     - <key/index 2> (optional)
     - ...

resource name
    The resource name for which the attribute needs to be resolved. The
    resource name must exist in the ``resources`` section of the template.

attribute name
    The attribute name to be resolved. If the attribute returns a complex
    data structure such as a list or a map, then subsequent keys or indexes
    can be specified. These additional parameters are used to navigate the
    data structure to return the desired value.

The following example demonstrates how to use the :code:`get_attr` function:

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       # ...

   outputs:
     instance_ip:
       description: IP address of the deployed compute instance
       value: { get_attr: [my_instance, first_address] }
     instance_private_ip:
       description: Private IP address of the deployed compute instance
       value: { get_attr: [my_instance, networks, private, 0] }

In this example, if the ``networks`` attribute contained the following
data::

   {"public": ["2001:0db8:0000:0000:0000:ff00:0042:8329", "1.2.3.4"],
    "private": ["10.0.0.1"]}

then the value of the ``get_attr`` function would resolve to ``10.0.0.1``
(first item of the ``private`` entry in the ``networks`` map).

From ``heat_template_version`` ``2015-10-15``, <attribute name> is optional
and if <attribute name> is not specified, ``get_attr`` returns a dict of all
attributes for the given resource excluding the *show* attribute. In this
case the syntax is:

.. code-block:: yaml

   get_attr:
     - <resource name>

get_file
--------

The ``get_file`` function returns the content of a file into the template.
It is generally used as a file inclusion mechanism for files containing
scripts or configuration files.

The syntax of the ``get_file`` function is

.. code-block:: yaml

   get_file: <content key>

The ``content key`` is used to look up the ``files`` dictionary that is
provided in the REST API call. The Orchestration client command (``heat``)
is ``get_file`` aware and populates the ``files`` dictionary with the actual
content of fetched paths and URLs. The Orchestration client command supports
relative paths and transforms these to the absolute URLs required by the
Orchestration API.

.. note::
   The ``get_file`` argument must be a static path or URL and not rely on
   intrinsic functions like ``get_param``. The Orchestration client does not
   process intrinsic functions (they are only processed by the Orchestration
   engine).

The example below demonstrates the ``get_file`` function usage with both
relative and absolute URLs

.. code-block:: yaml

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         # general properties ...
         user_data:
           get_file: my_instance_user_data.sh
     my_other_instance:
       type: OS::Nova::Server
       properties:
         # general properties ...
         user_data:
           get_file: http://example.com/my_other_instance_user_data.sh

The ``files`` dictionary generated by the Orchestration client during
instantiation of the stack would contain the following keys:

* :file:`file:///path/to/my_instance_user_data.sh`
* :file:`http://example.com/my_other_instance_user_data.sh`

get_param
---------

The ``get_param`` function references an input parameter of a template. It
resolves to the value provided for this input parameter at runtime.

The syntax of the ``get_param`` function is

.. code-block:: yaml

   get_param:
     - <parameter name>
     - <key/index 1> (optional)
     - <key/index 2> (optional)
     - ...

parameter name
    The parameter name to be resolved. If the parameter returns a complex
    data structure such as a list or a map, then subsequent keys or indexes
    can be specified. These additional parameters are used to navigate the
    data structure to return the desired value.

The following example demonstrates the use of the ``get_param`` function

.. code-block:: yaml

   parameters:
     instance_type:
       type: string
       label: Instance Type
       description: Instance type to be used.
     server_data:
       type: json

   resources:
     my_instance:
       type: OS::Nova::Server
       properties:
         flavor: { get_param: instance_type}
         metadata: { get_param: [ server_data, metadata ] }
         key_name: { get_param: [ server_data, keys, 0 ] }

In this example, if the ``instance_type`` and ``server_data`` parameters
contained the following data::

   {"instance_type": "m1.tiny",
    "server_data": {"metadata": {"foo": "bar"},
                    "keys": ["a_key","other_key"]}}

then the value of the property ``flavor`` would resolve to ``m1.tiny``,
``metadata`` would resolve to ``{"foo": "bar"}`` and ``key_name`` would
resolve to ``a_key``.

get_resource
------------

The ``get_resource`` function references another resource within the same
template. At runtime, it is resolved to reference the ID of the referenced
resource, which is resource type specific.
For example, a reference to a floating IP resource returns the respective IP
address at runtime.

The syntax of the ``get_resource`` function is

.. code-block:: yaml

   get_resource: <resource ID>

The resource ID of the referenced resource is given as single parameter to
the ``get_resource`` function.

For example

.. code-block:: yaml

   resources:
     instance_port:
       type: OS::Neutron::Port
       properties:
         ...

     instance:
       type: OS::Nova::Server
       properties:
         ...
         networks:
           port: { get_resource: instance_port }

list_join
---------

The ``list_join`` function joins a list of strings with the given delimiter.

The syntax of the ``list_join`` function is

.. code-block:: yaml

   list_join:
   - <delimiter>
   - <list to join>

For example

.. code-block:: yaml

   list_join: [', ', ['one', 'two', 'and three']]

This resolves to the string ``one, two, and three``.

From HOT version ``2015-10-15`` you may optionally pass additional lists,
which will be appended to the previous lists to join.

For example::

  list_join: [', ', ['one', 'two'], ['three', 'four']]

This resolves to the string ``one, two, three, four``.

From HOT version ``2015-10-15`` you may optionally also pass non-string list
items (e.g. json/map/list parameters or attributes) and they will be
serialized as json before joining.

digest
------

The ``digest`` function allows for performing digest operations on a given
value. This function has been introduced in the Kilo release and is usable
with HOT version ``2015-04-30`` and later.

The syntax of the ``digest`` function is

.. code-block:: yaml

   digest:
     - <algorithm>
     - <value>

algorithm
    The digest algorithm. Valid algorithms are the ones provided natively by
    hashlib (md5, sha1, sha224, sha256, sha384, and sha512) or any one
    provided by OpenSSL.

value
    The value to digest. This function will resolve to the corresponding
    hash of the value.

For example

.. code-block:: yaml

   # from a user supplied parameter
   pwd_hash: { digest: ['sha512', { get_param: raw_password }] }

The value of the digest function would resolve to the corresponding hash of
the value of ``raw_password``.

repeat
------

The ``repeat`` function allows for dynamically transforming lists by
iterating over the contents of one or more source lists and replacing the
list elements into a template. The result of this function is a new list,
where the elements are set to the template, rendered for each list item.

The syntax of the ``repeat`` function is

.. code-block:: yaml

   repeat:
     template: