virtio-queue-0.11.0/.cargo_vcs_info.json
{ "git": { "sha1": "a6365caae501c57acb22e6227a4659ffec328ea5" }, "path_in_vcs": "virtio-queue" }

virtio-queue-0.11.0/CHANGELOG.md
# Upcoming

# v0.11.0

## Changed
- Updated vm-memory from 0.13.1 to 0.14.0
- Updated vmm-sys-util from 0.11.0 to 0.12.1

# v0.10.0

Identical to v0.9.1, which was incorrectly published as a minor release.

# v0.9.1 - yanked

This version got yanked. It should have been a major release. The vm-memory
dependency - which received a major bump - is part of the public interface.

## Changed
- Updated vm-memory from 0.12.0 to 0.13.1.
- Updated dev-dependencies:
    - criterion (0.3.0 -> 0.5.1)
    - memoffset (0.7.1 -> 0.9.0)

# v0.9.0

## Changed
- Updated vm-memory from 0.11.0 to 0.12.0.

# v0.8.0

## Changed
- Terminate iterating descriptor chains that are longer than 2^32 bytes.
- Updated vm-memory from 0.10.0 to 0.11.0.
- Updated virtio-bindings from 0.1.0 to 0.2.0.

# v0.7.1

## Fixed
- Skip the indirect descriptor address alignment check; the virtio spec has no
  alignment requirement on this, see `2.6.5.3 Indirect Descriptors` and
  `2.7.7 Indirect Flag: Scatter-Gather Support` in virtio 1.0.
- Update the `add_desc_chains` mock function such that it works on big endian
  hosts as well.
- Check that the queue is ready for processing requests when calling the
  iterator functions. For now the checks are limited to the avail address and
  the ready fields, but they should be extended in the future to account for
  other fields that could signal an invalid queue. This behavior can be
  triggered by doing a `reset` followed by a `pop_descriptor_chain`.

# v0.7.0

## Changed
- Updated vmm-sys-util from 0.10.0 to 0.11.0.
- Updated vm-memory from 0.9.0 to 0.10.0.

# v0.6.1

## Fixed
- Return an error if the number of available descriptor chains exposed by the
  driver exceeds the queue size.
  This avoids potential hangs and denial-of-service in the VMM, which were
  previously possible by iterating multiple times over the same chains.

# v0.6.0

## Added
- Derive `Eq` for structures that derive `PartialEq`.

## Changed
- Use `add_desc_chains` in tests.
- Update dependencies: `vm-memory` from `0.8.0` to `0.9.0` and `log` from
  `0.4.6` to `0.4.17`.
- Upgrade to Rust 2021 edition.

# v0.5.0

## Added
- Added getters and setters for the Virtio Queue fields.
- Added the `state` method for retrieving the `QueueState` of a `Queue`.

## Fixed
- Validate the state of the Virtio Queue when restoring from state, and return
  errors on invalid input.

## Removed
- Removed the wrapper over the Virtio Queue that was wrapping the Guest Memory.
  VMMs can define this wrapper if needed, but it is no longer provided as part
  of the virtio-queue crate, so that the naming scheme can be simplified. As a
  consequence, a couple of functions now receive the memory as a parameter
  (more details in the Changed section).
- Removed the `num_added` field from the `QueueState` because it is an
  implementation detail of the notification suppression feature and thus
  should not be part of the state.
- Removed `QueueGuard` and `lock_with_memory`.

## Changed
- `QueueState` is now renamed to `Queue`.
- `QueueStateSync` is now renamed to `QueueSync`.
- The `QueueState` structure now represents the state of the `Queue` without
  any implementation details. This can be used for implementing save/restore.
- Initializing a `Queue` now returns an error in case the `max_size` is
  invalid.
- The `Queue` fields are now private and can be updated only through the
  dedicated setters.
- The following `Queue` methods now receive the memory as a parameter: `iter`,
  `is_valid`, `add_used`, `needs_notification`, `enable_notification`,
  `disable_notification`, `avail_idx`, `used_idx`.
- Use the constant definitions from the `virtio-queue` crate.
# v0.4.0

## Fixed
- [[#173]](https://github.com/rust-vmm/vm-virtio/pull/173) Fix potential
  division by zero in the iterator when the queue size is 0.

## Changed
- [[#162]](https://github.com/rust-vmm/vm-virtio/pull/162) Added error handling
  in the mock interface and the ability to create multiple descriptor chains
  for testing, in order to support fuzzing.
- [[#174]](https://github.com/rust-vmm/vm-virtio/pull/174) Updated the
  `avail_idx` and `used_idx` documentation to specify when these functions
  panic.

# v0.3.0

## Added
- [[#148]](https://github.com/rust-vmm/vm-virtio/pull/148): `QueueStateOwnedT`
  trait that stands for queue objects which are exclusively owned and accessed
  by a single thread of execution.
- [[#148]](https://github.com/rust-vmm/vm-virtio/pull/148): Added the
  `pop_descriptor_chain` method, which can be used to consume descriptor
  chains from the available ring without using an iterator, to `QueueStateT`
  and `QueueGuard`. Also added `go_to_previous_position()` to `QueueGuard`,
  which enables decrementing the next available index by one position and
  effectively undoes the consumption of a descriptor chain in some use cases.
- [[#151]](https://github.com/rust-vmm/vm-virtio/pull/151): Added
  `MockSplitQueue::add_desc_chain()`, which places a descriptor chain at the
  specified offset in the descriptor table.
- [[#153]](https://github.com/rust-vmm/vm-virtio/pull/153): Added
  `QueueStateT::size()` to return the size of the queue.

## Changed
- The minimum version of the `vm-memory` dependency is now `v0.8.0`.
- [[#161]](https://github.com/rust-vmm/vm-virtio/pull/161): Improve the
  efficiency of `needs_notification`.

## Removed
- [[#153]](https://github.com/rust-vmm/vm-virtio/pull/153): `#[derive(Clone)]`
  for `QueueState`.

# v0.2.0

## Added
- *Testing Interface*: Added the possibility to initialize a mock descriptor
  chain from a list of descriptors.
- Added setters and getters for the queue fields required for extending the
  `Queue` in VMMs.
## Fixed
- Apply the appropriate endianness conversion on `used_idx`.

# v0.1.0

This is the first release of the crate.

virtio-queue-0.11.0/Cargo.toml
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.

[package]
edition = "2021"
name = "virtio-queue"
version = "0.11.0"
authors = ["The Chromium OS Authors"]
description = "virtio queue implementation"
readme = "README.md"
keywords = ["virtio"]
license = "Apache-2.0 OR BSD-3-Clause"
repository = "https://github.com/rust-vmm/vm-virtio"
resolver = "1"

[[bench]]
name = "main"
harness = false

[dependencies.log]
version = "0.4.17"

[dependencies.virtio-bindings]
version = "0.2.2"

[dependencies.vm-memory]
version = "0.14.0"

[dependencies.vmm-sys-util]
version = "0.12.1"

[dev-dependencies.criterion]
version = "0.5.1"

[dev-dependencies.memoffset]
version = "0.9.0"

[dev-dependencies.vm-memory]
version = "0.14.0"
features = [
    "backend-mmap",
    "backend-atomic",
]

[features]
test-utils = []

virtio-queue-0.11.0/Cargo.toml.orig
[package]
name = "virtio-queue"
version = "0.11.0"
authors = ["The Chromium OS Authors"]
description = "virtio queue implementation"
repository = "https://github.com/rust-vmm/vm-virtio"
keywords = ["virtio"]
readme = "README.md"
license = "Apache-2.0 OR BSD-3-Clause"
edition = "2021"

[features]
test-utils = []

[dependencies]
vm-memory = "0.14.0"
vmm-sys-util = "0.12.1"
log = "0.4.17"
virtio-bindings = { path = "../virtio-bindings", version = "0.2.2" }

[dev-dependencies]
criterion = "0.5.1"
vm-memory = { version = "0.14.0", features = ["backend-mmap", "backend-atomic"] }
memoffset = "0.9.0"

[[bench]]
name = "main"
harness = false

virtio-queue-0.11.0/README.md
# virtio-queue

The `virtio-queue` crate provides a virtio device implementation for a virtio
queue, a virtio descriptor and a chain of such descriptors.
Two formats of virtio queues are defined in the specification: split virtqueues
and packed virtqueues. The `virtio-queue` crate offers support only for the
[split virtqueues](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-240006)
format.
The purpose of the virtio-queue API is to be consumed by virtio device
implementations (such as the block device or vsock device). The main
abstraction is the `Queue`. The crate also defines a state object for the
queue, i.e. `QueueState`.

## Usage

Let’s take a concrete example of how a device would work with a queue, using
the MMIO bus. First, it is important to mention that the mandatory parts of
the virtio interface are the following:

- the device status field → provides an indication of
  [the completed steps](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-100001)
  of the device initialization routine,
- the feature bits → [the features](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-100001)
  the driver/device understand(s),
- [notifications](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-170003),
- one or more [virtqueues](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-230005)
  → the mechanism for data transport between the driver and device.

Each virtqueue consists of three parts:

- Descriptor Table,
- Available Ring,
- Used Ring.

Before booting the virtual machine (VM), the VMM does the following set up:

1. initialize an array of `Queue`s using the `Queue` constructor.
2. register the device to the MMIO bus, so that the driver can later send
   read/write requests from/to the MMIO space; some of those requests also set
   up the queues’ state.
3. other pre-boot configurations, such as registering an fd for the interrupt
   assigned to the device, an fd that will later be used by the device to
   inform the driver that it has information to communicate.

After the boot of the VM, the driver starts sending read/write requests to
configure things like:

* the supported features;
* queue parameters.

The following setters are used for the queue set up:

* `set_size` → for setting the size of the queue.
* `set_ready` → configure the queue to the `ready for processing` state.
* `set_desc_table_address`, `set_avail_ring_address`, `set_used_ring_address`
  → configure the guest address of the constituent parts of the queue.
* `set_event_idx` → called as part of the feature negotiation in the
  `virtio-device` crate; it enables or disables the VIRTIO_F_RING_EVENT_IDX
  feature.
* the device activation. As part of this activation, the device can also
  create a queue handler for the device, which can later be used to process
  the queue.

Once the queues are ready, the device can be used. The steady state operation
of a virtio device follows a model where the driver produces descriptor chains
which are consumed by the device, and both parties need to be notified when
new elements have been placed on the associated ring to avoid busy polling.
The precise notification mechanism is left up to the VMM that incorporates the
devices and queues (it usually involves things like MMIO vm exits and
interrupt injection into the guest). The queue implementation is agnostic to
the notification mechanism in use, and it exposes methods and functionality
(such as iterators) that are called from the outside in response to a
notification event.
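The setter-driven setup above can be sketched with a small stand-in type. This is a self-contained, illustrative model only, not the crate's actual `Queue` type or signatures; in particular, the assumption that guest addresses are programmed as two 32-bit halves mirrors how the MMIO transport exposes 64-bit addresses as register pairs:

```rust
/// Illustrative stand-in for a queue's configuration (hypothetical type):
/// the driver programs these fields through MMIO writes, and the VMM
/// forwards each write through the corresponding setter.
#[derive(Default)]
struct QueueConfig {
    size: u16,
    ready: bool,
    desc_table: u64,
    avail_ring: u64,
    used_ring: u64,
    event_idx: bool,
}

impl QueueConfig {
    fn set_size(&mut self, size: u16) { self.size = size; }
    fn set_ready(&mut self, ready: bool) { self.ready = ready; }
    fn set_event_idx(&mut self, enabled: bool) { self.event_idx = enabled; }
    // Guest addresses arrive as two 32-bit MMIO writes (low/high halves),
    // combined here into one guest-physical address.
    fn set_desc_table_address(&mut self, low: u32, high: u32) {
        self.desc_table = (u64::from(high) << 32) | u64::from(low);
    }
    fn set_avail_ring_address(&mut self, low: u32, high: u32) {
        self.avail_ring = (u64::from(high) << 32) | u64::from(low);
    }
    fn set_used_ring_address(&mut self, low: u32, high: u32) {
        self.used_ring = (u64::from(high) << 32) | u64::from(low);
    }
}

fn main() {
    let mut q = QueueConfig::default();
    q.set_size(256);
    q.set_desc_table_address(0x9000_0000, 0x1);
    q.set_avail_ring_address(0x9000_1000, 0x1);
    q.set_used_ring_address(0x9000_2000, 0x1);
    q.set_event_idx(true);
    q.set_ready(true); // last step: the queue is now ready for processing
    assert_eq!(q.desc_table, 0x1_9000_0000);
}
```

The `ready` write comes last because the driver only flips it once all the other parameters have been programmed.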
### Data transmission using virtqueues

The basic principle of how the queues are used by the device/driver is the
following, as shown in the diagram below as well:

1. when the guest driver has a new request (buffer), it allocates free
   descriptor(s) for the buffer in the descriptor table, chaining as
   necessary.
2. the driver adds a new entry, with the head index of the descriptor chain
   describing the request, in the available ring entries.
3. the driver increments the `idx` with the number of new entries; the diagram
   shows the simple use case of only one new entry.
4. the driver sends an available buffer notification to the device if such
   notifications are not suppressed.
5. the device will at some point consume that request, by first reading the
   `idx` field from the available ring. This can be achieved directly with
   `Queue::avail_idx`, but we do not recommend that consumers of the crate use
   it, because it is already called behind the scenes by the iterator over all
   available descriptor chain heads.
6. the device gets the index of the descriptor chain(s) corresponding to the
   read `idx` value.
7. the device reads the corresponding descriptor(s) from the descriptor table.
8. the device adds a new entry in the used ring by using `Queue::add_used`;
   the entry is defined in the spec as `virtq_used_elem`, and in
   `virtio-queue` as `VirtqUsedElem`. This structure holds both the index of
   the descriptor chain and the number of bytes that were written to the
   memory as part of serving the request.
9. the device increments the `idx` from the used ring; this is done as part of
   the `Queue::add_used` that was mentioned above.
10. the device sends a used buffer notification to the driver if such
    notifications are not suppressed.
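The driver-side steps above can be modeled with a toy split-queue available ring. This is an illustrative sketch under simplifying assumptions (the real crate reads the ring from guest memory with volatile accesses and endianness conversions); the key detail it shows is that `idx` is a free-running 16-bit counter that wraps at 2^16, while ring slots are indexed modulo the queue size:

```rust
/// Toy in-process model of the split-queue available ring (illustrative
/// only; not the crate's implementation).
struct AvailRing {
    ring: Vec<u16>, // descriptor-chain head indices, one slot per queue entry
    idx: u16,       // free-running counter incremented by the driver (step 3)
}

impl AvailRing {
    fn new(queue_size: u16) -> Self {
        Self { ring: vec![0; queue_size as usize], idx: 0 }
    }

    /// Driver side: publish a new chain head (steps 1-3).
    fn push(&mut self, head: u16) {
        let slot = self.idx as usize % self.ring.len();
        self.ring[slot] = head;
        // `idx` wraps naturally at 2^16; it is never reduced modulo the size.
        self.idx = self.idx.wrapping_add(1);
    }

    /// Device side: drain the heads it has not seen yet (steps 5-6),
    /// tracking its own position in `next_avail`.
    fn pop_heads(&self, next_avail: &mut u16) -> Vec<u16> {
        let mut heads = Vec::new();
        while *next_avail != self.idx {
            heads.push(self.ring[*next_avail as usize % self.ring.len()]);
            *next_avail = next_avail.wrapping_add(1);
        }
        heads
    }
}

fn main() {
    let mut avail = AvailRing::new(4);
    let mut next_avail = 0u16;
    avail.push(7); // driver publishes chain starting at descriptor 7
    avail.push(2); // then one starting at descriptor 2
    assert_eq!(avail.pop_heads(&mut next_avail), vec![7, 2]);
    assert_eq!(next_avail, 2);
}
```

The used ring mirrors this structure in the other direction, with the device as producer and the driver as consumer.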
![queue](https://raw.githubusercontent.com/rust-vmm/vm-virtio/main/virtio-queue/docs/images/queue.png)

A descriptor stores four fields, with the first two, `addr` and `len`,
pointing to the data in memory to which the descriptor refers, as shown in the
diagram below. The `flags` field is useful for indicating if, for example, the
buffer is device readable or writable, or if we have another descriptor
chained after this one (VIRTQ_DESC_F_NEXT flag set). The `next` field stores
the index of the next descriptor if VIRTQ_DESC_F_NEXT is set.

![descriptor](https://raw.githubusercontent.com/rust-vmm/vm-virtio/main/virtio-queue/docs/images/descriptor.png)

**Requirements for device implementation**

* Abstractions from virtio-queue such as `DescriptorChain` can be used to
  parse descriptors provided by the driver, which represent input or output
  memory areas for device I/O. A descriptor is essentially an
  (address, length) pair, which is subsequently used by the device model
  operation. We do not check the validity of the descriptors, and instead
  expect any validations to happen when the device implementation is
  attempting to access the corresponding areas. Early checks can add
  non-negligible additional costs, and exclusively relying upon them may lead
  to time-of-check-to-time-of-use race conditions.
* The device should validate, before reading/writing to a buffer, that it is
  device-readable/device-writable.

## Design

`QueueT` is a trait that allows different implementations of a `Queue` object
for a single-threaded context and a multi-threaded context. The
implementations provided in `virtio-queue` are:

1. `Queue` → used for the single-threaded context.
2. `QueueSync` → used for the multi-threaded context; it is simply a wrapper
   over an `Arc<Mutex<Queue>>`.

Besides the above abstractions, the `virtio-queue` crate also provides the
following ones:

* `Descriptor` → mostly offers accessors for the members of the `Descriptor`.
* `DescriptorChain` → provides accessors for the `DescriptorChain`’s members
  and an `Iterator` implementation for iterating over the `DescriptorChain`;
  there is also an abstraction for iterators over just the device readable or
  just the device writable descriptors (`DescriptorChainRwIter`).
* `AvailIter` → a consuming iterator over all available descriptor chain
  heads in the queue.

## Save/Restore Queue

The `Queue` allows saving its state through the `state` function, which
returns a `QueueState`. `Queue` objects can be created from a previously saved
state by using `QueueState::try_from`. The VMM should check for errors when
restoring a `Queue` from a previously saved state.

### Notification suppression

A big part of the `virtio-queue` crate consists of the notification
suppression support. As already mentioned, the driver can send an available
buffer notification to the device when there are new entries in the available
ring, and the device can send a used buffer notification to the driver when
there are new entries in the used ring. There might be cases when sending a
notification each time these scenarios happen is not efficient; for example,
while the driver is processing the used ring, it does not need to receive
another used buffer notification. The mechanism for suppressing the
notifications is detailed in the following sections from the specification:

- [Used Buffer Notification Suppression](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-400007),
- [Available Buffer Notification Suppression](https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-4800010).

The `Queue` abstraction proposes the following sequence of steps for
processing new available ring entries:

1. the device first disables the notifications, to make the driver aware it is
   processing the available ring and does not want interruptions, by using
   `Queue::disable_notification`.
   Notifications are disabled by the device either when VIRTIO_F_EVENT_IDX is
   not negotiated and VIRTQ_USED_F_NO_NOTIFY is set in the `flags` field of
   the used ring, or when VIRTIO_F_EVENT_IDX is negotiated and the
   `avail_event` value is not updated, i.e. it remains set to the latest `idx`
   value of the available ring that was already notified by the driver.
2. the device processes the new entries by using the `AvailIter` iterator.
3. the device can now re-enable the notifications, by using
   `Queue::enable_notification`. Notifications are enabled by the device
   either when VIRTIO_F_EVENT_IDX is not negotiated and 0 is set in the
   `flags` field of the used ring, or when VIRTIO_F_EVENT_IDX is negotiated
   and the `avail_event` value is set to the smallest `idx` value of the
   available ring that was not already notified by the driver. This way the
   device makes sure that it won’t miss any notification.

The above steps should be done in a loop, to also handle the less likely case
where the driver added new entries just before we re-enabled notifications.

On the driver side, the `Queue` provides the `needs_notification` method,
which should be used each time the device adds a new entry to the used ring.
Depending on the `used_event` value and on the last used value
(`signalled_used`), `needs_notification` returns true to let the device know
it should send a notification to the guest.

## Assumptions

We assume the users of the `Queue` implementation won’t attempt to use the
queue before checking that the `ready` bit is set. This can be verified by
calling `Queue::is_valid`, which, besides this, also checks that the three
queue parts are valid memory regions.

We assume consumers will use `AvailIter::go_to_previous_position` only in
single-threaded contexts.

We assume the users will consume the entries from the available ring in the
recommended way from the documentation, i.e. the device starts processing the
available ring entries, disables the notifications, processes the entries, and
then re-enables notifications.

## License

This project is licensed under either of

- [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0
- [BSD-3-Clause License](https://opensource.org/licenses/BSD-3-Clause)

virtio-queue-0.11.0/benches/main.rs
// Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// SPDX-License-Identifier: Apache-2.0 OR BSD-3-Clause

extern crate criterion;
mod queue;

use criterion::{criterion_group, criterion_main, Criterion};

use queue::benchmark_queue;

criterion_group! {
    name = benches;
    config = Criterion::default().sample_size(200).measurement_time(std::time::Duration::from_secs(20));
    targets = benchmark_queue
}

criterion_main! {
    benches,
}

virtio-queue-0.11.0/benches/queue/mod.rs
// Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 OR BSD-3-Clause

use criterion::{black_box, BatchSize, Criterion};
use virtio_queue::{Queue, QueueOwnedT, QueueT};
use vm_memory::{GuestAddress, GuestMemory, GuestMemoryMmap};

use virtio_queue::mock::MockSplitQueue;

pub fn benchmark_queue(c: &mut Criterion) {
    fn walk_queue<M: GuestMemory>(q: &mut Queue, mem: &M) -> (usize, usize) {
        let mut num_chains = 0;
        let mut num_descriptors = 0;

        q.iter(mem).unwrap().for_each(|chain| {
            num_chains += 1;
            chain.for_each(|_| num_descriptors += 1);
        });

        (num_chains, num_descriptors)
    }

    fn bench_queue<S, R>(c: &mut Criterion, bench_name: &str, setup: S, mut routine: R)
    where
        S: FnMut() -> Queue + Clone,
        R: FnMut(Queue),
    {
        c.bench_function(bench_name, move |b| {
            b.iter_batched(
                setup.clone(),
                |q| routine(black_box(q)),
                BatchSize::SmallInput,
            )
        });
    }

    let mem = GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0x0), 0x1_0000_0000)]).unwrap();

    let queue_with_chains = |num_chains, len, indirect| {
        let mut mq = MockSplitQueue::new(&mem, 256);

        for _ in 0..num_chains {
            if indirect {
                mq.add_indirect_chain(len).unwrap();
            } else {
                mq.add_chain(len).unwrap();
            }
        }

        mq.create_queue().unwrap()
    };

    let empty_queue = || {
        let mq = MockSplitQueue::new(&mem, 256);
        mq.create_queue().unwrap()
    };

    for indirect in [false, true].iter().copied() {
        bench_queue(
            c,
            &format!("single chain (indirect={})", indirect),
            || queue_with_chains(1, 128, indirect),
            |mut q| {
                let (num_chains, num_descriptors) = walk_queue(&mut q, &mem);
                assert_eq!(num_chains, 1);
                assert_eq!(num_descriptors, 128);
            },
        );

        bench_queue(
            c,
            &format!("multiple chains (indirect={})", indirect),
            || queue_with_chains(128, 1, indirect),
            |mut q| {
                let (num_chains, num_descriptors) = walk_queue(&mut q, &mem);
                assert_eq!(num_chains, 128);
                assert_eq!(num_descriptors, 128);
            },
        );
    }

    bench_queue(c, "add used", empty_queue, |mut q| {
        for _ in 0..128 {
            q.add_used(&mem, 123, 0x1000).unwrap();
        }
    });
}

virtio-queue-0.11.0/docs/TESTING.md
# Testing

The `virtio-queue` crate is tested using:

- unit tests - defined in their corresponding modules,
- performance tests - defined in the [benches](../benches) directory. For now,
  the benchmarks are not run as part of the CI, but they can be run locally.

The crate provides a mocking framework for the driver side of a virtio queue,
in the [mock](../src/mock.rs) module. This module is compiled only when using
the `test-utils` feature.

To run all the unit tests (which include the documentation examples) and the
performance tests in this crate, you need to specify the `test-utils` feature,
otherwise the build fails.

```bash
cargo test --features test-utils
cargo bench --features test-utils
cargo test --doc --features test-utils
```

The mocking framework and the helpers it provides can be used in other crates
as well, in order to test, for example, a specific device implementation. To
be able to use these test utilities, add the following to your `Cargo.toml` in
the `[dev-dependencies]` section:

```toml
[dev-dependencies]
virtio-queue = { version = "0.1.0", features = ["test-utils"] }
```

virtio-queue-0.11.0/docs/images/descriptor.png
[binary PNG image data omitted]

virtio-queue-0.11.0/docs/images/queue.png
[binary PNG image data omitted]
؈Qʼn+Bs6`=]^sI)ɞZ8o:д'ٙg}mtȓfjcӕQ݃ɣ0c< gB12-71ѡ!ﺝPw2q.g!tT[vgjD(W)H~׿dE\eM{7RGkR.M!4컊I/,g\LW c i]=K\֋1j"&\Hqm%\+K?#5gª8yEhstp[/{5?9/W0{uUʊ4?o,ԙB `qT`@^F$ Ȁt;nR<C/dʆrR xRf37_OzWw uf; 稝+=OVrsf]*W`_oRN5,8Pr9*#: sОMw_`}  ^.Pk|-dG .slٻ_k3n2ZU"Ho[nOQ?t9.dXtWz>X@D·曭q7pʛ7{GpvtdZ<}Qz[w]So~`p+!"YTYaùsS9wֆ 8EMK.|ƅ,*wL|v'e}'`5yP&:1Ak(\qq|];0#"s˽+WlSǥӃimGQl$MҮeʷNU"1ɷO!tTK_)GtP@O~ JvϽcx|y9$a0`2iJ74LNjV|5w^B4VLXX^W[#̧|wɷa{>x1 k.2?EBm%$ wWOoFXaѪP[ƾ3UL3 D5{k `O 1r= b;q;= XLv6`*ɷIRCA[3&+|7%_FC#0d®$*Íp 4]a)79e}1f Tk^N':#Nc2j͂ sMʗ// Q<9}f9{: ,ܱț/:04$[ҒCRR{_#OچQjԐQL}*+$[k5 䮙˭ۖxzV3IfΆQ$$%DBRf1h7$l$s38|9}j]}lH n&>/N* cX=g]dF,ׅq'1z ,bnSyL F-ƅkwM"9wA!ܜ`!)!%}3ɵ"` ?^bێg@뱖ٻ$S}x,AD#;H"1B٢jUk~j[TE)QjMA9B|<䜜s\y>qn,_u6B>KuGseS?&o.$U)Gz@Uc$fb}^ IDATj͵Ԛݲ J """_)c߶ lA4o泀y>.5eapj# I""ZR~ͶW /@!.WȟwwEY>0oxc$~Jz%tX2[`axfQ((BH"E 2q!l2MY8 Yh m|3&kSk F|.箽#GjN*Gh^U-D!ID$٪Pd(&.cFi2fuJcNk>A "L[ RۉBHUͯ{Ol7}ŋa65 ,::+V7a~06R[ǰjj+}jԨ#UQHJr>cL&NI8}zڐjCS foMz&rȧ :gwzRHr4 ~ d/y[%VgϞ[,Mևn>w3jP7*\#ςMxI'hPnvB\@Q05Pۧ$/_>r %G_ЫЄo7M6_{ Wn#_|/ ($$>%J{ }Ci@_sIoRȣ$Ά/&GU/s/k+(]TDDD5 i֣Km=3nWl_38qA)%kȪ.T֍ Oov|) 6ЀBիs>"߷~)0;|9EL7,hW>{c̻˹x$ޝL׍7N~{r|}}5" h}q^䪯/kռ-ST#bKY8r3gW^Lv>$lH hR=".vsс$LТu(gnH}q A 5QP@C>&޴{,zֵ?do'MBC4"$""ϯN:,<_P,^Y};fegc(^yt?mgb`$yq[fYI/]?c洹ٽGoKi?Ϣm۶888hpED!IDD^0$mŏK2njTIdd9EDDoO2}Z;Zi$yq*T^m.'E$ӟ a'D$y9{ƃ LdIߐԯH/V3-iEtWDDDu؞o|Tɫr4iѸ~3oN5T/N5"$""//M44hАUf&~݂*(U@ jF,A|74k֜)SjED!IDD=cII_ J܌2❓CډBs7sRa$̪ȳɕ//KA)pwHV N׳)X(?%KZDDwwf|TDԨh(IжcKfm$V$g9|y d7'g<4-۟Io7ۚ1QHy5jGv2IT~?tX[fvؙlumdw7Q"OkGv AAAz"BJ4X.̣{X;.q]}Z;+F 2%ygH!&|sIe #[$ϙ1h@=ED@ ?aJ__Rk\j>Ny 5sg߁p ;M!,[= " 0stl[?ҽX5c-Lڈ\voՂ ~I]\~NHS4ysm1k?sn`Q\~cr\j1}Q:~5hb-ZkժNo&$ϯ{۔.Q .$={pr=ؚf͚ ,"ػƴǸ=bkRQQyR*G@X/E +hp2S7"N-sX^:0Qm 6@jg78fܯLX̛2aؑ^?/dՕfLTxSh4/AsF}>3A*g#Loa״gW>WD$JLͨU3lH肫o>8qa܌ǵZ:>DBxd]Oӎi<Ȱ0]dbPb딛ñrSez~9*:#`"ob>9w.Ǹ|k֐&]u"eJ;fA-Z0/݁ $Ͳ3ye_㻭]Meڵ|;e6i,|&Ygh=qED ;{b1#b5#o(#fح|ݥ1zymܝ<(>6!Q_ s&{cb<_ng%KҺg9Z5?#ij n^p;e4vg͌sn̘׈T+},8ج1cG2qxFv\,5sN^>~}g|>3*T`m߹s'ǩ0ً%t>;ROXVI}d8ltaBnɛ&p4a.-f*+XЋcKp*`Ւm<;&V쉌[ڸyPsޛSp58R:4<{W5}7ҾfG=GM)I(}?XBu A 
~=+GIwe^4nŽ;m{mFP`wV>al5* rz_;{r! jωu\""IIyC6m݀dbTXF9U1DD^fDDޠ͛7kj]BDؼy !"$"b;WJMZчƵQ1DDDDl9b+vR]oU $=ɤ"HZt@⩂K;{oU aBGJI/of>bK<'cDD n'"H8::|b -bKi0$k֯"" I""IG=[:b 8" %w*BHC=m!ϥ5`аTPAQHIB4iC7Ъs#4V1DDDDPRF/~OŐ4rafuۖ*BHЭgW*-$|뗠k*BH J>EMcCuk7$I]wbJ لam XoO(QKI I˕]:OU1c+ѵO+$$44lٲШy:$# e9TXQI`I1U|5U a'T$쵣]-;6( &IAб}x.NSL% q:'c2TD3I""6ёgL] L^яCwWs$$yKV'GT;:Eqq8:#__OOfDDlfDDmÇ3|HO4팫KF/DDf֩ |,^5///FDD!IDDbŊ1ww1a$B۳ *v\*O,=}'Щs'LA,YT$d[i@SEDE1i ;_-nNœU N^>svnӵ{g5jC)V@ClfD=sœ !M11JT5nܘƍsfϞç3[8;S*/=J$`tVoiiѪL]F9R$aI緸)+WBUw1l ~399vk֬a7qm*{S?`h}a5KBt?e槓Wzj-Zͥ 0C|>_i3%v?Ł[vߎB5ߟlٲip@D!IA8؃S?j'I`޽l߾s)QYJPУ$![gInܰE$m<'G &%[`o ضvn.,L!IDDD^Yث r3sn +a=qRD;DD!ID)PM/p?4_a_H{ }zuBD^9Nqf~ h0=!@V IDAT; miMe#RP}u) Dテ' mZhB?=C׵ͩPm7pu$\IN]6H+gej=OzS5dl"""̓2e(^ QMD$AL7|̕1 nOvW~=l6YWoIDK]X4ֹ^GФ} ED$Hz=DD!܅B@p( `h[5":msq/1%yÎm`PlvJ5 "b^"""8C *mp<г& ~qRQ։($<>Wubš/=Q Ĕa։($=7ǒP쇋"810 y+"b۴QHz~ ێ@%/|KTӶNDDD!鹙/[Pٺ>pGhe4"b㴭QHz^WTRo^F "Iu"""IC>c) }෣Ҧ.PA:~:WXʋHuAqc04T"""o>$بǏaZر˗uxPr2ZC ClHT% ŊC*" I"? ӧCd!" \ZuC0 N|Bꐾ}Pz׆P 8BPf\Hd`/AO>ՅzMu(T q~VC!_8fSUD$rP D$ɉѣÏ}C88g|yml8c_-^OLu%(̇)lSٚHp\'zp!~^GP?74:m[J]> \QHy0mNMt- Xo݀"W]Ҹ Hm.T =ޜ>E> F>]fv_OUDLۉH{АW3`H: `ALϴ|},硛y~C7_,3]+n <ӬH T"`¡,jƛ_ l Ak)AP(~ӠM {4 ^'g@P@pm¸}@4,Y}RA平<@:0 9*'ZՅXmB.t5* #lo‹?[OeBYOr:ׅ6!^f hmCsĀuLh ! naU>k0$7v"D`> !U5y`?FDD^=$((0z1ۭc6FN{]g1ƩŸQa^ax1{'~Fzqb|nadjeQa1a<̰Q'870Rf3׬O5e Haц1a8a߃0ޱ^>60r41?w5vaNmh0l%3ƴK}c>i70״7 F|} @'?0%afaTXi;ƕXÈXaεiFL1aq1aa]%0d7As0d5ijsa+b|23 #pS,`?ݷ^0JJ% c0f}\Yq7Q~Ln'ķa-fZo>kSFwֶ05=ݵNY~HҠ$I: KBJMP |ևFÙCp8~ZsZ;3%@~dbU o ~'7q,ZP[@xUz`=p[MwA`U0JfXZ¼P^X7;@@jmdzlrpG`^hW(nnqJp6CX]6Xp Át\Aa7,h.y _*k!5?]HFT`mV]IӻͰWZ+.Po_e~i2srnu v6.-`("" I""(gր1] ,3~tw G֐aX49P= V9 z%MYk~=1n{`7aĈHI2[dg1 s^`f`^'}HOQ,`pȜ|7-XÉ6<`A8R6d/2 DA)8C!"h ,"IC ~.] N^>\fwz8e%|t?N<K[3 +[>~, DƇиi,d ?êШ5e 0"`Ӂ(#p;>YX5C+ u=Xy8g`hQJ  vS/8Rom4!<`z櫰tJ{xgGY@recXo iyS4$"w<`g6v,&gAYG/s}/[[<]'wvafޫ?=ۦ,'?BE-0-l=Cа&, @Z7O/3~1gp2Ԝm=-(SGy!HrB F/fhvM`D  -,@pÂ^._.ab3>3aRn6ig0j YW}ғ6 7ķ#(:/ $&,p>:} ~n`Ì\ж`t aD{<ޝz@m|\:„ . 
i!w_v(ܼ SD$!I쭌a蘧(r Ԁ}pp4e -h- J~L$=v9aiADDD^J """""$fmzAϞ͛<{ZUu{^T)kʖ-z}QHA3f@qADDD)UZ_c-SMD#GOu:FD@vO^J`dI^_Ӳep7tI*tҥf̰#WjUik0j($$z3fXg\]U IO$IIDDDD!IDDDDDD!IDDDDDD!IDd.""oFD,_nNA$!:LnOڶ!4$"""""$"""""$"""""$""""""tO5QHIdtT;y6oV D^vQHQHQHQHزzۉ$$z܉țP^D^fDDytG2x$(8Z~Tk؋`5e10cP)w~g[m^u)CkVFkuQHO"[<4-v@,YU70+7-J겴/'qvLe[̪ͫJ#zfp9Ce{ k? $ed1`GW!w*6p*($ɫ$y/94,m<5ߪAPPkƛEqlF'кM(uB: cږvMR;c^a0?reHEHYs]bni*Gޱ:i>4OTq?~L')! ?,5:l#7,Ȟ0eHC*ʄs,4T,t`GВ! 1s lDӶiۼ{L+fZ6NE/†[-6_eTӄjSL='" y.fjX/Ynu1ʍ8a=}}F{7g0`n>;0c#9nZ "#4[1nWw6Bƞ5̖0cnF!QaM1 #`cX\VYczeb 0vZk[ x1dFδK0gFW22]eD2K:`.1JV05Le>5cfg n Lg̸kX1_]o^wŰn]/iLhcmGw#u!އaXkkdoԸk1 FLQafF Í#цaXZ6J >`hM6u^wV$%~x(BhOl aaxUBd8Gu,p]Hcc\}ły1]5x+&wGL_172nr(nSFj4 HUFٶdd/Nbe:#߃ss5(j)+[N1I}_ǟ 2?Nﷰ'wJLָՋ,9MFx: ,K8UM^͛'y!jMv" 3 \@}FE­l-7v3Gk^'K^YΉ($H"fҲuog\5?gO{j.UTi衭YD38{;Sxv#9}FvD (gZyҷb ]YL38~?Ɠyfʽ;i{>`-71dlS2:$ڷփ( $GGXpP%c@DO;˜Jh_mb={-~ْJ?靵^Dc$""IUժzy N$L܇ŗb@9fVMDDL-,$%E8}#""fi&IDDDDDD!IDDDDDiw;[DDD䟼ډ($S!"zʝo׷p\]'ng4$"""` ;mBvw.\ަm[$$yC >Eģ}t81x(>b1c6̿v&{HʍhRp!SR;1ufR852-Iߐ!P̘a A?Yp>?M 4 $ofDF?Eؽ\wyɍWH2=S%E:R䐒i\ɖ'888bgd=v&ÂbbXOqDEDŽ=y幤䟢ȘJ[a>m$س4plF.8]z`4$$ !D1SG*9xv $ث >nA8 cCVF:g L&q$m7VD_{]qcx4(@y T $!Yь%sF>h lK9pٵA_Ft}هYsgIWl_L^gJ 08\.Qng%<d:Ne$ ?~<|ᇸʘh^?~\*āÍ> 7O*[qa1]'"q=F`P@MV Gb7XP2s#CB|5OitKJ+ 8-3bt^ IL8cV՘`1fyMҚ5kXb|$!.jYd K,aR!j((``. xzyi$5)O񅣱y75`R]swfUc.ӟH'Xτ;;J@M;ip">˨nnf.]W_}%=T_3zh, ]wTH;v vGik%`ŇX2vx^Y̾=zCKy:wŠ1,>ꄟg5y:>Uϭl(toLZ73_D1eb"YgP{; d[ymY<}f#(;̧OϞZnXˬGU'ox b$GlI+jz6&0ܒs7ڐdsfchDW8i/sDzi5|7u >ccZafU$`x:c>[{aq(E!Zt>H;-x_x=| ,t'_|22 ֧x\ 9zy,& o!i]Vi2t:V^ĉBD^Y&|AWA)#f$0pXjSxk0e&E NbAVZ%, hh0ld71C탷_Z\KΫk@՛~zѱߌ|v&wEDଆIki/_Nee% 'e2d|Rv 9$_Lr؀ЩkG2*~ڬ6KwW ر1ZIx~Ŝg/`p=hVP`8qf3uEeذ2+Py  P8\SM 5|} zsÀrvm‹O&H ԞӻA퇫Kf3(!~5*PǬ)=1aY!߁=>Yvmh`ucwXn/Ұ7F۠r0\ "fOl7To/=<ㄨo̚5K*Sz› G+**j$﷿™NpԼ 3gfziI)K..ldhQuuaĢy1j-0,d2o< HBԡ)S0gٟT@HhXNjvyqvڝkU@m5'V/uM kNK@۠p&qdCk Qϟς "$lRR, +g,`Sf.[_ǪӖ:o~ٙxzy^6‡Ⱦԭr41`Dԃ7؎dr! Qٳ'iiiR ҊRhҧIw_}?̋o,? 
^y:ޱ[iOy[Q%_8$G6oLv$ QZhAdd$۶mʨgDnyv Hb;q| <Ʉ1 ds8m0M@B>% `p_.T!i„ һ''NdӦMRhu@^,U(6UyK}yYd}<ܯrtMwЭW7&?1η&+Gr28.>z~-d:tRz%=KzҳgO-[&QO9y3"6׉⺤J~^NC(?QXzJI ONel{\Pv$@GɡIԃ`_:V%88Xze˖(JRRR2XnYk|F6#>7 ?VAn=@!8 5ut CVoYEn}_>"O$G<{"䶎B[ BB۷ocƌ^%D=3f 퓊CfO~5ۉC 1d-Hooz="odΣO?i>> V״{j5`R!N>O!XB;x ңg<(@ԥ>E677ۑ# x)0#}ևzVm^(ӵPNRB IHr DG o$$$HEԑޢmpO\߇<*rбzNCxj0~ ^Q'OߩZէQ~r`9rpzlLJ+BLL(!Y۶mh4Ruĵh= 6D4:׉C}]%Ww%-) d4+iGocl'hg (jv$A8 PB៷Ʌݻw*W= QߴZ-wBIy'8}]u=tm{-_ Xrib # yMcgU-zr* Oˁ$= $$]r!^L* Xf6[EC-^ǛLe]O<= ݋-'WkϨY59@ RBu[ U&w8šG Q܈zpB +#z|&-1w?Ǣ|K?#o6& md5M:kegIOUвFm&I9ޮ yG$q^؆lw?/m_}YCйx\dNKwN7jBUr#i(qzf< x7nh^r2 !$!?+C\0?$IoxbE3~c[ }HGB IBa_~89|ۣsvvڑpvW65qCsC֯'!m1at6!$!8<:KeN8r])91a,Y}{8t %!09لbB@V="V٬D\k1Yj(,d\CS53N*gT* zN*g]yP\m΂yr$p箭ae tB!!I!3?7O䖦]Jum N㏯{@<[Q|fյT~1UYJvII)(ϡw4NZ:/͏'$̢dZxEڿ# mjp~~lsr:(NK!VܬqrRgMsv!}tiI:BHHBr nuP\ǹx+r}"{ E][=#`I=3>6~iO7aDG2`zu D]c֋c'p( X\n6_7n #2}r !$!h8/"&g$dB@CP߁( ?q u ҿA'_|>zf%1zh^ddleso|4+}C%yPK HC1|N xmd&BHHB څivSr̽ӝ{)my5-&(]g֓ V'EK<0nx7~ 6&uVZ{B!!I!}8;0>p2sjFt@v]pn{T\5+Iw3:㌾A~]"AY|%ag8ڣWX)?^ً[OmAڡS{W s.^NyVcOp[jA_l~9>9:y>YACP&&u6 uGxῃHNƟj&<|5VQz;.ad2bU}kvpD'+ !$$ !CNoO(L߅R״Iw*{gwNJ~<DF {" LLV?~||*.V3xjSڗ{;&Q}Wb,b -T?ȓ |z'%cX6.gcLK;L3䖗~(W'WC>;'[v͘ʚwXx S*o9>yi$*c/{vQA,hfBQ~9 C$Zg}ׂ,]LW8tpb|bu"Al8)%յX`--.ĖSZm9;>\–E|nb77yQB#He{+5Y7('=f.5k{spn~z m@Ճ8j =|)Sp-pS8vʇ& !/yBz5[njny2c/l;}Tȡ ›+R_3IπA vQP[N3:߀wdU(_ G~Ӂ}l&)$/ߊPpEGG (~}wOw[! ǚO¼ɳnCFBBBԃ_=WIĶ[$Ջѭx7Vr}-NmfKRE>&s^fRcZ^aԆ/Ϙ9IN~{'DQ)(kE.EZ‚. FVJJQxsD>xJ).ť(|}VNFLF>.ٓvBQD5+*曃rS; SÐ/Sצ~ j}a@;6&cWM0,d>{x{?]oA nٍ4Id/+o f-YlT@TjjXUЛkiLض9;&Ss*iB$ !D:y^Qݛepvf [ZٔW>3ql$f,;CU{<5ab>L;㓱q͵Bx^OZVYvVE/xwA[IM̚5aϽO7ϳp]mH@K̿3%g;o }5"GquÌ}ze_cxnE(ȳcQae&Z:ytYѬ)VKy~%UP$DxO>!}1Ҏ"ܣ>Yoo#f,5~ÌQ^CC !D9Wx$RziK}@} EgH/Llr@QE%B4kmkkńv礍DulrQHEU;#|.)#t? 
+g,`Sf.[_ǪӖ:|+M",d("嵏YuV'Y0k욧$!];pv;Mk),EYTPAN:ۦ9g޽u =&mbUA=8tn4BB]0%-&KAO&?dtҸ1qs6*ˮx[3_2c5Xq=؛4BB]}?}ZZLaw&qkR~o:<#{,O, (B IDAT?zW*u8B&ZϞv@6Bl{zhA[3z3 gmBTtk#0d 3sm8rrl߃d!$!htg 1Y)0h}1O%g h(e(-)0i"M$=䮉wZSgH/J-!I!!I!EZ)3BƖo=$L&$t_%'RaH[s뉌GFQ2Zv,%%,Z_Lʷm vEBHHB%(1Y2ʫ 󉖆l`mT֔RΏKI#2:1QDEfܽc" (Slj:_Ҷm3'W>}@kk^ǵm]qIi1Uѡw18d3*(N Jza"Q) O$ (bZ3!<MjޱZwOB@hC>pPufL/甋{e 2[Ld=:+{~Y&- ҈v</*ŃmH۶٦m׏KIı*"$$!hJ* :lZzF-[_p! @qwW Ksvd` ؐ ҁ yUж enR\/Uɂ B8YR!.Qie!Zc"c6^҈v"̻5,,Ń֭pտ|HMR@\;~-`R<O_IFjB܈HBBB4QŕhVUFZ>3蕠PA5ZD5Cڜ(kU1.O7J/%$SXr=! /󘫩`.8 $!hMhCum8dm ,6j@ ~ a(~yM)[i\O>~K\l!ꥸ8,h_~x\ah5"9$MXQE8%x|EX,b }]E#&Bmd/La|&x H( ;K*apK$|0&m}pK Vk>MX9*; \_B\#ㅤuҀ0Vr> ߝSrbۍP{ ψ#kb-5$ 53?#9j/6EygN ڊj i^8ԝקQWz*PXФ~=q!ӿIՅOpᷧ#㮇ONK`7 Mep*xi6?uS# 7L maE !!I!?Z#]SQ]OОXjtdR`äP՜=(@Yob7ʡ/~G ncO9#0m:|8ٳS`qf:Q`Oj"TNZP۷n:tŤڹ ]=`K[C{(CRMq h c`\X4R0_(]`xÉZBHHB\8;iGО䢱J7j[QVVTkj(BaRZՅq6i9~N"o+D(XΏ ) TrF^~jlo/~7Wu:6,ETX?s2(\U(@ X!. >I4ᥝp$>7b=chv$!DyeY8gKeUŨro?,Ϩ,`JHyrWj5h>Eґq.$<o'9?LJpV4ZW"`[FQ\ V啶,c4*&⢤v[Pi#c-,XRm!p$l/*+4j& ^8 at xa$V3J$V'g½(Z0'3z5ꃇpIS%W&23/^T)gxybiՍ➋pݑnO>a~(-&U?2VDn ROp) 6+0AZ,*kQVä0y\3֟cF˅cuY!mY޷imYA{( *F[Qe6 &A tK-T:N+>eaK(FbIHB[HP*T,5y-ë7MS&ۙ›*Ȯ؇3OwVz>@QCuY|G߁G:)Aw7j,J5EkIUX\(? >'3QjQˬ`I7vB+ /Bd:Z%Aq0=JNOApl<1fkj|aLx8_IJ]ߣTǿؾ;T@x-fp/ny7qP@׫W/t)ڤ* . 
!ǛxaCyM][sh z骪gsC!J=S5sɏFZ޹g~fଆq{7J=\~O:fNGQIHK##IBq,VCN5HҨ58\ %_Ms%ՉD h\ `/9 w{`x8ã!4_ܖe tZ2,LʍӣAc\v2x'YCBEM.\/h>EPdWARJ1ND~v', j{00Ä:t\8=A|@;CH1P%OG]`P9HJ$#Iv\#IVgӌ"QZu(ޛKׯ|3GV3*\Q4_٩p4p@rb*UZ+΀mPS |] 5<5,':/eVY wq!wXͷ#_Ld]?ߥ\Kp?1Rx%TNԘ%$]H2Uڪ/\Kp lX V q5]-V3*|*Cq6<{nN J؞{Cêio72O6=t# $vW_VH8{rΟ3Táȫ.S,'?-gߞ:p RiA1M}l 41kqsf[)='5WplFK b{`eafLPk#N5>N&|.z ހo'ʩT۵J/{0' w` T`ئM8;!!I4kJBGBϲzgqw}y3,|2ʫn|+GֱsPюqO-b}:rbثxgL}jt&F 4o>?˺}<ľ9m !N{BI p )V('!hG+Zj*\%P:* ט+Q)U ihaε'{m]^KZf*0Z\-Oo!߇N8ʢ=^iIe2>?]_&џn>BHyN[}_0r.tZUȬ.5Yu9~[{ZSٝBc4)4svo7KYSBωJ&֘jܤ?UAC ";?z0Vlw5Pޕg*d^VG^ )VsD*boE3F(s7 I:;9ҀvιG%'>'ޯԏRze۫1UꤓB4i2N k~*^D5')=q%.}{1˶1x 9dfF,) <<\E$'Iҹx^xJs 0D3UZp+7%'>/%SF,}~7Dwzgnf'OC]o󡫆b.@n&ևL ITHڙm$ɊYzh+ν,O2J4] wiv"p[eQ6F| f{'Z1|l7NYK_qO"Ըh8nkJ.NZ :jepQe-F(3Pqڿ5ĥ^8%4҈B&MF峤bcw)ޯd,̥l?2^Bq~gg3L:LSLm޹;-n sTD>)p 0SjmoŗB6{OJ)?Ʌ/r=Zo* ԂDgOPvJjXFuʚr<҈B&MaZg &KBf'+ d pÕtAn Y:x+*MDwv?}g"w}Knz^n' cc3[DjiiD; o]CX]*IIHBK@{0ՎdJ#L^Y&^nPiD!$!:QZYeo݆diFQ&;\K B!!I!{U<><(Nd;#곯'$UFBHHBqQHҷ!ldiFv2a1,^ 2'cH'^ ƪ|A҈B IB!~!ˮu%8E%ո㪱D.*:<́q-yٹe]_}4BBBP+yFp"CFˉs'''tֻo2ǵ|f)LnBHHB{*W'7+^g" H~W?>WY"2Υ9 |SIZ"c.zO(!B^ p sZ7gR!XR ]pq]CBLf!*5?{o67J ل"!Bɫ!"cÖo{v'm?;<2}wۤB HADʪ}u-?E{׵b *ҥH${c`G!\rHfgsy3-loH KϘq|>WyDD!IDD~ClPE WSR]l!EUG()&" ~aLQHߖґŠ\tvmU{ ,Գg,n; G}qAEI4"$""/9"\Cx7&-)#5tݝ9?U puqaⰳyl\s +Y)" I""CSJ@^՚fRPɖHbU7]5nO<$O;~"TM($ ; Gz`&|=1 ->L<61rN1 ';X*" I"""8\c;U1{W~#年 gr B%3Yt՟~DӄBѽ/HNw`u">yy{k.|9zG,5 5?Tz^BPcG&{~Dq?Xo0Q^[DM($ȉѕåY?nXs*5'ҭPDxvvNbr\oz=K3II$9q]cSQ[ct}Nz[,}:_ j\v7 # IDATe|<.}4"$""'}X a278XBQ&{&oZc4~=krI QoQHkl?vtϦ+9]ZCNi&[rV)S`.<777M($_=nJ:xNՏ;|CToeO34uӌP^䊈Bu14ꨪ+s16翿[{.NKiSY[b&* ^+" I""MA]M0-~=s&_ 'ؐN7MR;QHS * PJnlq!kjΏOܑU"c$95zN9@0څvŷۃכjyb-ćcLϸz!BJ "rF__S&!d_<Br56TU/dQHR DDNa''oӎ3z`X{J| Y;v,;h8̘ w-!|?tID(7@DyA:=w%;Oi+y"" I""+F+o#uT&a+i4%TST]D~EE4i47`SpwWqs /t*﨨-> #T $5.mlzn{㯀Ӭ6d/bJ*ЇɊȀvDteOzCZEFz&:QHc;M&dUue*cxT $r~kXHqwLs !"$"b1tŎ*~kB؇($8a&PU_BAE!*  j?VQHo {l,V ~EoP1DDDDEnbe*4}nR!DDDDGBH:D$9v䮦sTOₓU $2x,FrKrJgj2*Bc*2SY[bI-&p BQHql׌G$'m\5l !"$" W.;obZBDD!IDyqN5ev̔";DQHq.Sr벾V1䄬\ȈԳ 
LT1DDDDS^tOƑ *=yn#"" I""έG bٖ\ŐߴR;@QHi&$1f@JL*BH J!Y{ɚ̅DtwHspS DD (% 68E;>OU6lmBH?Hc!"LJ5"z9x$cڃYvLQ HNcj bO I֭ZGH3du^kiz16m>p v[z!=T '] fALT!QUU#JBhT!ZA.x?Gem *ja<.t AI*tz [!ɫnr4h jkkuD4 PUb}UATNq/8`hЕ$_i0C`V[[ˮ]tD4]vРB"kFKiu;8-mDˆʑ*0YNZNDZ($uIS{_E1bZUgZX"9 JA*!O!ՋrQ"ͬ={vbhʙ4O?K~AA 36}*Cǩ mH| $_3^ I'o߾̝;WGH3;w.S!Hbh*7Ed*yh %{o;:Қ\C!$%''SWWGnn*f~T ;t e񎹔 vﶿK\P<Ai|=%C[]g͚5 ZXXHcc#iii:D… dȑ* jO.,9*viס5dl~3I OWAڸ 1T}1zh^~eY"_d̘1* >ꈊb'X}¹j,}Us(-"?Yv:ҏGI[!EGGatB999 4ܡNʫ{p\ .*J+0[Ll9چJ&`#| w{.Җ#+Zrlf۶m 4HG)3h b8?zdiͯEo ӂoaEƧ J˙vUQWV8T:)bO}rRP<^I2xzzb2)a0x[`>R{4ܒLBHƥ_$/񪅴m3&ՉvxF#3gԑ&r ! IG1ZS-+Z 9.&B/ ***\hc+1k1,%?4T vRTyC8?O}z3ccC6mv̮ͅ3o( ."nfuba66{ 5 Uc @OhAYM!%{):}GC4e$.ӡ&NM0=Mm&$|駼̟?_G 0a3f`xʺ2ZÖ՘,&$'iQIUمqw[lb` -fXWQ-9=C}|_ I_5:)L•W^ѣU MEUy-惫m&¿ DcX)?@~a{@R" ԤKQ+•U q./o!B xWtEIwL0kVINXe]{ +o5X`HBфEn5P\uÔVŅ`R#{g&WX~@Wya`hZTחSVSHYM!UuUCo iLT@&O#?SA6{ۡO i6a̟?_K_3uT.]JU9s8\LVN|=za;O7/<ܼt  .n\qmZgX͘-&,V :Mu4j7aZ+rjH 댫+1D+P#@< cKDBxe=g4iPH3f̠W_}ѨWڔ2.rBCC9H᩶bkk* j՝Ҧ0dd1Shr#;ow_|} _KA(gMyGEE5550grssUq:999< (--U@qP3 ZXf|f+<9qJoZQFpZ_Sgi0Ñ*ȫ}M. 7:CJcS!/fڵlذݻwUw*ÇaF#N:ѳgOGBB$"F@V,Ժ@f6>2TSII!v`0Yl;>!yƪ$""""m҄ߏC}#Mќ8:5n9Baꠐ$""""9 %u_$;bKl A^_$;VX ooU-DDDy@fꠐ$bG^Zf}wJDDY\9f& I"v9o olVMDDDZ_$;q*@EWDDD幺@IRHOW<6jr73 @QW!I|!硫I""" I-cW!I=d H7 mitqC=E3P qh}c໋?ڸA$S($($ ($Qs%QHiAT$$B5$q j I"-HDDDDDv""""$""""ED!IBBqNDDDDDDDDġq($ uQHQH9ۉB85n$v""" I" I"Qw;QH $҂NDDD!ID!I8n'"""" I""""PԸADZۉ($($GDDDDD!IDDDD7BH Rw;$$㨻($Ӱټ6?8JYq;7@Ɇ<-qT%,XSA!VJin9_﨡V q(!IKĞJ8uqPV[w;SsV=kΈ!~*`̪($S0g@\-0,fcTw^t$ I"-HKDDY*j]x=$ccXTH$$㩻s37Pbu!gK\| 0X(,7$" I"""f[`} x*Sw;wn'NE٘ U NvXm+<̝/+,Y TaDRApOpS)3 |0" 顧O n \cSdC܉=ӕ$q*n'5 fpE,ݮbx) sCO3 .  ``<mnU 7u\&OO\Ro?4. 
($fr4 wt_^0:~嗃o5{GY!ZѪ<6X(:Kr an:q$vF%gvҢ[ _U3Ssܷ G!Yt{Ay EJXex-i`/WE}{,,]pZwlr W O 1po:΂Gw7A0vn[[`r7헵fUŃ`N] ق殂kk᩵_ ,o##hZ~hOeAn]Q3U_Xx:bxbTx{u ^\ 9i/\l/jy?tI Hz#- W(s;|^̅7VB|ӳios&տ'XPGH I"gJ DEƾt5nαčZ7 f`g-,Zc ;dmx<: P']#3gõ0k5T?8 H_n <"u%Y08d=`N'(= Q`sFiJ~B#<2 s` 0s4<2R"`qSDK9c1poq6Ӌ=Cݴ|/?f.xp~tNGqpZ?8+rv=y104nY 8w4\ [3b+,<,+]M2UVxNyD"$"b'skO6nD`H4,͂F+,ˆ +``?kXXO|QʟOL0x[xW) :y2=>ݭ:}r;q*G RkYĎ~^7^7(z@ 7M駾 %u`:.;?~y'րZܴ?Vz~Z7qϣ] l! Ǿ ZmPWSewG5ݯtx'ܾ^fW >lzt.gz~JǍvr(/(9O@hho |NU-D!IE<$èDBZ 8GLDXjku09^=C ˏ޼cJ+[7쬳ݷd*Wv;V Yukh)*VԃǶdwTN** /~c0ϓ*] S/ ?] j/vNĩ 9Q_Mk-ܻ7/ e^`c%c3ܶ,Ypb Z˳<$=MO. \Aw?޿ Ǻccl64.V LKep6m=`,=ҴbȰB?Ñ:o>M`C/Bd9yMjA0C.DHL:}s5k,AE񪅴 ;D् d7B́2U\yCR Te{ae@ IDATw{Zq:=VD; G we6x!|á,n,p`Hrsr.-d#l)bh osaa3H<`o t_^;?_)QJX]jk}>X4\O?dzOz2Z3d Qr#|~I`Bm!pPcݗ`!L=!8P`+m*VYAjzNk w+|!, 6B)^eLh㳧}nZBZDDDD֌/í=}Zn'N^5yU0s I"_,ADDe>q($ρc۪lDDDy"^pl[n<$Dĩ .w[X /3 pR۵]۵]۵]-`ap5@hn rq845nSMDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDJ """r̍l^ƪFL.`T/ *BH[b5aE> k|6*^`%?y*8 - Yi)$/67#)SƖHD!IDDD (m W"\8~e]P;z1cUDDDDDڊ$d0VJ*L I" I""""m1#&~~ŨѪ$$"""r޽ppRw[`KM_d8\i kG)*mEY&\ v"DDDD䤥jxbl,5Y `WH ީ5{p$^;Y"޾^ii >zYD!IDDDC>եy%PlpT_.eqs'k(i]mWC\ѵ$7=DDD Wg{vtK*llV8: G)Z$$"""˓ oHYfHd7(m˶B#fFl(d~&T>TFq*Zn'"""(%A [ P.CT# +b=93vn^N ,$""";C62K+ضd%)A/x;H7RMs,9e @۶`EB _H޴ڻ–zHQHvf b8n[ A9"""rJ =~o҈Е$$$?{D䄔CjH =E($H 6 0*bDDDDD;E$Q@? OCD7tW@$Jl uP-DDDD!IDulMDDDDDDbWDDDD!ID[o$ RDDDDDlbEDDD!Ii3_2.ADM`0gUg-*ץ֓.Ko9./|]?R3"""vb\Ӕs乗8t XSsdğ_͘($OqQuvm`1LǤpѤL\7y&OEYڅO?wxF=ٯę}L`拏2u`~a6a<)>1t<} =u:fskIwe""""" Ifvtu~}?z9+4C23Io~ ;?^$ϸⷉyx+ײf{o>!{l|=ٱ. 
1ƓC̵PFr wxNo;։ vN^厾n_~&2&zӿK=9ȆqxdΊvHx65""""2tOS$$<=+Ɵb:xly5~=⬿vKfՖ}[Xfk޸/7!(Пz :`DD$ȯYE .=/?qs̛_l'2lq]r]l:E~ۉ5};1.9zsϿPDΩHsaWH-߿q[@ lݛCcٟa=8؟>tI#{XD;+I""mĦM N~˭/GK+Y5O<81WOΌsֻөS'"" I""nzӭCVߟ%5wHVe<8f,w!~g6; 0u-qb[l!9!#uׅvv_Rp_/{zn<~Q59 ܉8.)ܹSB4'KM曧-}527\W>c='h0E/M-r)S&Gց+"$""dbdw>.1$Џ?2it 4 XK8PdztNaǼYTfga:EDDDTY[xQ̜2Ԯ}I|ťmn0 2eMLn=%Ki%j "$}d~w5YRz?r$TdN-†`~n|W]rǝw6jjd۶RVjՀp?Ft'K$iQ/5|M_ |6.ss#bg>k<#D33fvK5vfb. <z2Kyu?ndLTlfG ߴw/̈́ro Sb`.pi`R.Dfvg~'FEhPywGaмB5<s9zve޽*JbԗB^S?R (nVQ ~ApgV+ǷWX$iMEs㬩ȳO;򳯠b?\.eL?gWuOGν5ZFDk]|]Z`q%s󙳬T?Dŋs)TxĘPn.e x+o֭9f$˗/Wg|!]ݒ 2C#\q' [Z34ͷ -_17ޞϥhX )`C3CNԭŅsZ.7a'Lvc*\?7bk>ʺc->)%2siL7.{:fNL9O/ FFA~7(mqK.\`(%r\k! >#D' $̘Ox{8o Iꡓ;9+p!q$a9/=>3'g Kܗb<uuoqUw!13xhF?:0s}VUe2 \'yJ0+!y|KoB߳9gIMM%=^.!Cc yA gU>jf|>-c+xw郌iJ+ky`|:- SP\PWP0GI\icxˇggD錛h_j*rxe3:oO"ؚs?a|-wwaQe %"(*XXݽk뺮vډvb,vcaM#pݟ-5<<ܙs^f>s=1gRL%L _lzA1?szUW{ӵzFfߋ)d8wOƖI7 ̗C)%ņA@(Z+d (5M_&lNi3!\rƁd/$$,*./Svr)TTw)Vą?Hc BL~G1 =o͞l]6%)0_*PPгJ''.r6 T/3koIz:)A?H~vYBQe`7n>qq:''~SIͱ;ş* mu+5(hMI IYrChe*IԢ?K z@{\LPPafGNS%tq߉w.Տ`h딠4( ]p҇P(uxpx(!X颏zM"RPW3 I"s#cc[Ix:ǼS8~~bS;̜BݞTۜ}n̅w!JcYꕧG''R^z3Wj*mtВ[9-VOwZt12w^>ˀ'pf0I ] կ@e)݆tQ&.4 x'$qi !X.L6GW,,gq( !*sj/7SR|#WJ9(עV;jYAE:?u6ٷoJY*{nOqU7>fF 0ra6mXǑ8V1 cyyڬw G^j{/FDsZoƽs}2=[TKӪic5͏015zm{ypרwԭe F@ gSkrwʔr{rP =5d$$ !H3-=3Nrg\7^<oGG~0/F S:_Wbmn6 嗮i߶l5J>a3dr v.!D6uU Ӣvl;xƪs*J;2Log0Kf ڎXrQs"`uh,c!!I!D8tGlchǺz*q5 Iq7ӣy,Ӯ)}g'NȔpw/ zYJu>EaZ= m""%&&Ҥqc/cQgmԸvLo4l̳^_8r5Q\j8Rokx/&1eG)_L$!I!DU2ǖlZgee6f IDATy2c~N왥ݑ.tɔ661é[:Gϵ.<'(w>.4[Q`)[CA$"M͜>39RiG\gHR_^f?|e'fW3{"]Puܝ9zɟɓ&1|ʤHsp j54eBh&I!2Kشn%֒x\<dǬ34c5\~=ݞCWaachZ:2=\B IB!>Ul ĻBR_3p-A4n|7MV͛fsU)+ !vB @Ϛ8Mc͹ wR^qɊݩ㞾&&8&‘DU9y:-Ο=jLϰT^- "ݜ={7= {T*ęT/[4˴vSJΩnlHX~ 3tLo:~P_-,7g6P!CS' B!gdISIZ S!B`kU\F4)M=ShNcϮطhX͕]ԸG7uJ=] k؝z/[\ɵЖ,1?zƧޖ{^BYN!2!X=[߻3u:s/ΒC~!66#O2l4Hŋg>q11<u \5;m$ }gs~z@KB]B IBN8۾4V&[KT6.7QVK0i M4AO/f}+J6mʤILLD|z*wW9f#ܱ>}O,Ʋx34t7Ѝ y6I!& n'dd]Z+_օ߇z\0[ȹZ b%4t*TȒ6{\zg$( !?B-[P(wl. 
NyLrX-۲l@T^;v`ZE)~&6ETcZH"tyڝ!XRCq9yUť0F v%B" 2SˆfW F8ڵkkT>Ñ#Gr-3% MA[ަo4kLZ!$gŊx6p'_KkC1kpKݺu[.3O@2% e` X7ӌ{Ƣl^7Ҿ}{9xٞ\`5IB?@IJ+ypiOZ~d7/?kNoAAAmٌo*81#]ؓz]RFkbvz~K`c9xٞIBtb RZRnSֻ?g988p/ΥJiZ12JZzpç(׊ʥٰa:tX B&˘ >Ǹ[}i}=7bZ~ZIXəLJ!֭[iߠ<VZٿsBB0226u\OLbۙEI|NRʋN# XG}τ+8g)Su bPBvdΝxxx,ȶLRNClj_i]S;o{z ]nl011! 1S7쵀z,=1zW0t|אka DO)sj#ǙP(AZg 6G:e@Bdkr&)ў;p7*كȺZ|ӧOcHsg9v']e:9xo=[N*~c^G'y!N&QZ6 MI8 bg06n|-gB=e_*r$h._LٲeBdK_u&I D?{_=0TܹPXv,f٧S@Ӣ$7E&߻{jEgd-!dжsɪȩn!wOJ*WH==oėkD^(\M`*ծ_j,?O,%A< X>*͓9 ynCdƢ Ggtqh< ˓+u!ZMmI*5q/^ѣkPAҾiAiong%E 6/fqVɟ\! YcEYdK(tn+խYU6oBLLY!!s䆾eOTsKU >-XP0Ñ5X3"0. m ^KءVi̪ J[z*zF}@+@G7F(0Jt!^~og.G'PjYPM/m>Ӄ+VH_ gL$$5N٥-p> s[+E>Y"`t@9gliLrd|8zsQ~:&Cث˧FPDv0x';Pةj09>)9f3s2]~ Mmre?×Xl  Қ-P=@~{[N-F5,yxMұJ'0)TDEE~_Usړ˂vgos%e>-/>rs BC&\Riw$C_3(hz[iR.yW%Ԙo=js!ׯ/BdiU\LAP 0$EL!)6BpQKVT©vke}ZUɞx #|~?!이"c|akٛ3K\4lRyt2~>S[ <JWCm⧚9KoUgo P}2OYU5UK RקTj<^Q^z-EM/u,#])hbvfWx86MhXU!!IޤD0tDjn԰df3dLٳgKfc~ݏc|Weܖok8RlQG0~|jԨ!EBd rH&$Қ;)׽)u@_\Oz9sN 57x40'ܬhQ;v쐂 !$$ !4a)Y˘kSյFaծUBɒ%iŅ.XFY2^:/BdЯZ;KYCl4Cl! 0H Nf_DbEԸ5<(P@*zr&IlAz8|05ZoKAӉ5=ztg}ݏ:J-Bd pۃ 7!kM}z7pIuE큼 CGG>3KO ˛hl68n06l,Bh=n'_(:2z5 co% e}}}Zlu-i\̍DŽG}W)$|}} @_c0wQ9. 
Q,?M C\qTa{J:Ѻny]Ͻ{(\T!!I!Ŀ9}J% hlRܹR RhQ ::quV.)m3C!`L )dGIG{U,N (hws.ș3g$$ !$$ !gOVBO0R 6h؏̙02CRRR27R =%(dO)dO.SOz܊={Ν;K1ZMnx{0y!DE {fo8Ѹ?|B޳ z$w vylfyfEAGϹ~)r= J:C}nz~9};^B !IB]n H[\g)d&i۶5[]`PݳWoMg\)a=m[Pml<{񜤤$-BBBTׯ_'!.Vcu Bf6ޯ{CRtL/gᇶ)d2C۟ڂW&BHH.Ə/ 7oRfE zQK!3Iʕ zJ09ֵC Q+s,~Byu떄$!D B| ˔t̫mrXem۴fPԻUMJ:S Unnݒ" !$$e'ƍABbj7oм}lO6v1jבyJ/ksA[Mn.@J3IEŊB l?rN#^P^B^" !$$ !H9r[Ըz*Z@OOreJrzƵ ]" !$$ !HCȶ\ZR,Z\mϗ77=" !$$ !H Iy4'QjU)bQFNhd$EBHHBAAA8ؚkdԬYSEԪU$F=K>|(EBHHBQ忟Dp?9r"ffff\qhk{<\($FWW~u<~cBR,Ʊ@~иvə$!$!)BC^aaygcѩ0$GG I9s"Bh-[ iBB^aA}iKHBNy7X4!$P!!)(X 2n2 44\3$R0q,R?h\S$JZK !g4*Ƶ;Q䚤,PB$$k^-LPB IB!wA0R,@#I26'8P!!I!$$$ոv忳 SSSb5//BHHB iޙ8LLLY 1qn}==B I"[қ"* o]0*ۅf=~eߛ jBn_~|e(_Q\X+[i4mۋZݧ76IzUɌ/ӌ5jc(i?B#ygelHJLD_!u,Ԕױz$$&JZKVաng&#&TysUB%|3!?Kh(6̤g3("v3F3l=.͒J ] 6S+]Bb;dd&G3{Ne=OH~3~E}d 43&DGGcnn.;MIt'!1I#Ϩ !ħ3ILa^{QGqaؼo;ƸT)@[[^H˸ur2,g։.,ӂdmY3ۛ9c龜Ē6 Y[cLs]0eR_f4:&JDysLVֿO!۸D=fv4gUfKk/gʆeSNg+0oܟ1csk]zՐ nj[7/J,ʵcYOtʓM -|jyjG>+z$&i4#c#n$Pc,S4mRߍH8a| [T,Tqi fSL5)7|3P~cN΃+ݕ)kWb ~ GQL `Fz%:(7{aފclqj/[̜WOtG@]r"UJ.}f`l[cۚr|o~HYNF*ˎât{N}> wOhL 5IKD:A.]85adCrr2"U9sbÙ`z8(K^䥁=Ny>TxJr~s6d4ćC!IʓWXH):rpfH~Fވy~!VoKw=ųC w3P?&H 俟G PhJQ^Ѹ~B~y)(b$=]=B"^ko ˎܿ-o*<\$w"s%^cְ\}ΓA Jxu#]$-OR'Tg:G ܽ7&s^0^:`~J(Fy姅 ~0èk PnfZ?<Ł`ۅ]Ƴ4t&;TE$N_k̭ }X Gؙ]1T`䙬&|ِ|(a#YCQW$$B9 @ hgq R (Z Z-whHu{b/jf alnTϼW }/M*]xd}^~Tz#OQ;9mi\ڂ*jT=A(֮(EBڷk5 *hT9׬^A(J2NK#ң^Լ9G@ THܖA[iWs9G)Xe%f1<^|$Byr baiIhIgC=)`] K+)Bkt;-0pl:|w<-)MHYZr#TX3a.E?N5Weg@!"j^H!~08[k\#^ǒHA)BBJ3|2)i- >sƵŒRScb"q+] W^ѸZn)BBRvq{ zm7҃ǏӤI)dpq5Ϡg888H=AֱEPN!˟??hd۫.I$$e'OPtlçȟ?Qd!ܐ\#^͵0>"f>ǏR͵Fѓ˗O($tuu)[ڙO5lR29s1{4/*EBHHB+nȶiTgJ3ٹshZ'.ָF($%Jql{Ѽl%Ed[lr|]'m5?Q!!I![!dinjuIme1yym} _#'%ٛ ~CRK"m%]eQz0W̓[H>(c[-r Lç GeB 7h̪P,!I!)$9;s+F=%V99w2:uy-km@I'{ܕקcgcNgd\5CR I"huj/Mna44IG\|gө n[ɟImƬDy%$!6<|oSMD^7Ѧ/|T1SNֵ0NF2oaUYUnBHHBP#߮~y.)m߶v?3a!m(O^bP'?LYíL^0ҭۡ+#9ߛ;aml] RW"ԠzQ;xӲrBUq-TCSOLnmCeK _x37؅o[4i K>ê>hj^U@<[Ӹvc<{Pc8%YɅym)F]O#^fIH4 B+SfP"ȶ綠p>k>,`理yr~;a$]XA)'{ڌXDqR܍TTE )2#}bT\6akFج)l{F'fLi] m˾ål.hݣV_X֞V2ͼ7͡d;'~5k.P~M\E666uH<6bn6mʦ}1Uwj$1y\.$J)%$iZ-_)_BT\7hlY3li 
jWӦ67Nbt{r/1qRRe)H7Хlf8(AX_#>m[PyL%`,~?tps ߹M"7m^V4t {qS4q=)fQW*US۽8gº_Ŷ`[[گ~HJ">ǿ>:T휤0H: \} cSW}} |;!LSŋ/H{Iu4&>4![`M0wYX!ŊѳWGcD#0ȳsEKA33,һ6gJLߚUN{jzkW*yD2,''')fz8~\;Q+q$cϛ$(/EWGMB|Bju?)(!ƌCWpl|KNݙV͍a #>2U}"?X@PEE]^BBh:j}mkdhS2B?dzg.O)]Uk~Я]9QN])fzUK;UsPã6a?NgU5|W,"p'PM) 1E:7' `Ń$%S<%qzl{Ʉr9(T%CƆƩ =qCI68%ͭ{ѽq[Dͱkşb"p\JԧYB-C3 \"dlҥ\:%5n6)ĤShaX&=fD|,._~PյW=fkҸ#ݻwJHzcǾn{3xkZ53%sP^3cʞZr *TZX_gGW*06jȌ m$2z]OF.Z[<,F%H:-6o)G3xs$sG藺(kLѩ`"^.692 lBk? 壀{>n#Qo&-x$y*Z$ $UAܹ5<2%yvhV}= C!1{pL{v *'VkW͈[xo|  $$Je_W5T9|}}iҥKܹq5]R\.En])zLc|mwK奠B IB!>E68~]1oHk͛%McϛܡmT ޥAfңy5fOX{ Zh)BHHBqk"c55C'9/ aNUYjrh0vPFi W.EBdpۃ! mj6KӪߗ<{s~`*sq:-nl֒~2,3a?B I&Uħ9pg^V%NcÜ:s^ *saf,}8|& 7{s[I B I&Uħ35Ӄ016Z<%3p )5s:a2fߗ$$4 =="k" tԑ5{NkE_Ќq9)g:u;6ъfiv*I!!I!kwמ)jgN:~p|ӐˆiMHqx[QD5-/+&ĿTXW?z5}+%Iq?QEvSqss !$$ 6➌g=g`wRݶ23X"0?XdTGZ5o&f6ך>m=v~Kq/E24oQsӽG/گU}jX8-+SO){oӂ Sb ޽X!!Ih25>l֚.=ѱe z-k_+%yʶ`Dԯ1NJelÌoFYi}]:xq6L8U &(iaj B*k;f@ Ug;88njUqC o$ﰪO+C "ӘңwԪ~u_FԾm <ųvMI[:H,Ȗd C`VqfǬZӼo3FX?Wq~TZ>?c YTf%k|`;]XFZCW Q뛓o}3vgץ;Q3I~ӨQ2/o-h;UN0.O^b޿];d.*E]YlMOߵqt;80IW?.^Hn'%6nB[{Q嵮o&r\\\BlI$iOôfAf4o5-\()}9$aYZW1v'zjҦ @YZbw$ &tϦM~$wY9:MLxPFLܨX"GCsF"+rss#C!. ԺulXMJҤal_k1Vq*Z\"[h +dQ W\NJLY@'XJxqh|hϞĂ.W$_ahԩ!FoCߓ@éyP|l;rHdYAd]ߦ}F뛻s~~lN~ SSlUpʔ,ΦI=ps.}=Y!!IhOٙUȥ*TnmX.7)atJ?'S`'fn5)A:iݎ4ןu{ Ә)IE4rRka6J٢8GӦnO~[K >iӦr ,DMjJM}EKHxƍ˯*[<y@aWÚ,Δ);~/'nb %@2N[!oCOqzҿJ]u{ʞk.}:i]Ro>Ȓ0fj1A$~)S`63085G09rU F2I&7Fo k}=;1Cvb$I3zX9EOqG8y$m+ɓ'9:6iCY,MW} [ZLn\$*.3Pl'6u='sa~/]HLtL{l7A&XŘ+s";O;&!⿿5L2} &]{ڴoE,旄!;BY6uL>a;`< ?mu5~+2h$(hɄZaaݲȧH"ԩ]  }e]x/ r.% 8lje#xzr"Q`Rs; )+gx/v Y]L'uk~lS^HQo#FhG2i"+Ζ)=)[p]vr,Mh!u7|ޯB* ra[ג݅vcvuVs [t>wW1yN]gfkC\OMpzTgKem`D.)BShyyt<]yf[GjXF2NCə$!H';uf{\\ {Nr!ˁԮ%QsRH~O{$mYyn&u9ڑpjz=t[<+,h4nLIQtլݺڏN (oۀuӋq rxy2mߥlڸMoѯb@b7$`g,n$Ja$$ !x0bli}1̣&Vj={Vc~ijTļ)?3+66u`gΑVh>4m킉BvuA L yuC8N ޹D.oȽrhn`Mbcq>ٗTֵQF%QW!!I!Ļ4oޒQmRԁ2SЃsԩ,^Z6m~1[MGM|.%"A&M+r~I:8]Ǧl|v)B9]ŋ/H{Iu4&>yXRE$@FPqr4rs)diYd☕h崨0IY." 
virtio-queue-0.11.0/src/chain.rs

// Portions Copyright 2017 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-3-Clause file.
//
// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Copyright © 2019 Intel Corporation
//
// Copyright (C) 2020-2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0 AND BSD-3-Clause

use std::fmt::{self, Debug};
use std::mem::size_of;
use std::ops::Deref;

use vm_memory::{Address, Bytes, GuestAddress, GuestMemory};

use crate::{Descriptor, Error};
use virtio_bindings::bindings::virtio_ring::VRING_DESC_ALIGN_SIZE;

/// A virtio descriptor chain.
#[derive(Clone, Debug)]
pub struct DescriptorChain<M> {
    mem: M,
    desc_table: GuestAddress,
    queue_size: u16,
    head_index: u16,
    next_index: u16,
    ttl: u16,
    yielded_bytes: u32,
    is_indirect: bool,
}

impl<M> DescriptorChain<M>
where
    M: Deref,
    M::Target: GuestMemory,
{
    fn with_ttl(
        mem: M,
        desc_table: GuestAddress,
        queue_size: u16,
        ttl: u16,
        head_index: u16,
    ) -> Self {
        DescriptorChain {
            mem,
            desc_table,
            queue_size,
            head_index,
            next_index: head_index,
            ttl,
            is_indirect: false,
            yielded_bytes: 0,
        }
    }

    /// Create a new `DescriptorChain` instance.
    ///
    /// # Arguments
    /// * `mem` - the `GuestMemory` object that can be used to access the buffers pointed to by the
    ///   descriptor chain.
    /// * `desc_table` - the address of the descriptor table.
    /// * `queue_size` - the size of the queue, which is also the maximum size of a descriptor
    ///   chain.
    /// * `head_index` - the descriptor index of the chain head.
    pub(crate) fn new(mem: M, desc_table: GuestAddress, queue_size: u16, head_index: u16) -> Self {
        Self::with_ttl(mem, desc_table, queue_size, queue_size, head_index)
    }

    /// Get the descriptor index of the chain head.
    pub fn head_index(&self) -> u16 {
        self.head_index
    }

    /// Return a `GuestMemory` object that can be used to access the buffers pointed to by the
    /// descriptor chain.
    pub fn memory(&self) -> &M::Target {
        self.mem.deref()
    }

    /// Return an iterator that only yields the readable descriptors in the chain.
    pub fn readable(self) -> DescriptorChainRwIter<M> {
        DescriptorChainRwIter {
            chain: self,
            writable: false,
        }
    }

    /// Return an iterator that only yields the writable descriptors in the chain.
    pub fn writable(self) -> DescriptorChainRwIter<M> {
        DescriptorChainRwIter {
            chain: self,
            writable: true,
        }
    }

    // Alters the internal state of the `DescriptorChain` to switch iterating over an
    // indirect descriptor table defined by `desc`.
    fn switch_to_indirect_table(&mut self, desc: Descriptor) -> Result<(), Error> {
        // Check the VIRTQ_DESC_F_INDIRECT flag (i.e., is_indirect) is not set inside
        // an indirect descriptor.
        // (see VIRTIO Spec, Section 2.6.5.3.1 Driver Requirements: Indirect Descriptors)
        if self.is_indirect {
            return Err(Error::InvalidIndirectDescriptor);
        }

        // Alignment requirements for vring elements start from virtio 1.0,
        // but this is not necessary for address of indirect descriptor.
        if desc.len() & (VRING_DESC_ALIGN_SIZE - 1) != 0 {
            return Err(Error::InvalidIndirectDescriptorTable);
        }

        // It is safe to do a plain division since we checked above that desc.len() is a multiple of
        // VRING_DESC_ALIGN_SIZE, and VRING_DESC_ALIGN_SIZE is != 0.
        let table_len = desc.len() / VRING_DESC_ALIGN_SIZE;
        if table_len > u32::from(u16::MAX) {
            return Err(Error::InvalidIndirectDescriptorTable);
        }

        self.desc_table = desc.addr();
        // try_from cannot fail as we've checked table_len above
        self.queue_size = u16::try_from(table_len).expect("invalid table_len");
        self.next_index = 0;
        self.ttl = self.queue_size;
        self.is_indirect = true;

        Ok(())
    }
}

impl<M> Iterator for DescriptorChain<M>
where
    M: Deref,
    M::Target: GuestMemory,
{
    type Item = Descriptor;

    /// Return the next descriptor in this descriptor chain, if there is one.
    ///
    /// Note that this is distinct from the next descriptor chain returned by
    /// [`AvailIter`](struct.AvailIter.html), which is the head of the next
    /// _available_ descriptor chain.
    fn next(&mut self) -> Option<Self::Item> {
        if self.ttl == 0 || self.next_index >= self.queue_size {
            return None;
        }

        let desc_addr = self
            .desc_table
            // The multiplication can not overflow an u64 since we are multiplying an u16 with a
            // small number.
            .checked_add(self.next_index as u64 * size_of::<Descriptor>() as u64)?;

        // The guest device driver should not touch the descriptor once submitted, so it's safe
        // to use read_obj() here.
        let desc = self.mem.read_obj::<Descriptor>(desc_addr).ok()?;

        if desc.refers_to_indirect_table() {
            self.switch_to_indirect_table(desc).ok()?;
            return self.next();
        }

        // constructing a chain that is longer than 2^32 bytes is illegal,
        // let's terminate the iteration if something violated this.
        // (VIRTIO v1.2, 2.7.5.2: "Drivers MUST NOT add a descriptor chain
        // longer than 2^32 bytes in total;")
        match self.yielded_bytes.checked_add(desc.len()) {
            Some(yielded_bytes) => self.yielded_bytes = yielded_bytes,
            None => return None,
        };

        if desc.has_next() {
            self.next_index = desc.next();
            // It's ok to decrement `self.ttl` here because we check at the start of the method
            // that it's greater than 0.
            self.ttl -= 1;
        } else {
            self.ttl = 0;
        }

        Some(desc)
    }
}

/// An iterator for readable or writable descriptors.
#[derive(Clone)]
pub struct DescriptorChainRwIter<M> {
    chain: DescriptorChain<M>,
    writable: bool,
}

impl<M> Iterator for DescriptorChainRwIter<M>
where
    M: Deref,
    M::Target: GuestMemory,
{
    type Item = Descriptor;

    /// Return the next readable/writeable descriptor (depending on the `writable` value) in this
    /// descriptor chain, if there is one.
    ///
    /// Note that this is distinct from the next descriptor chain returned by
    /// [`AvailIter`](struct.AvailIter.html), which is the head of the next
    /// _available_ descriptor chain.
    fn next(&mut self) -> Option<Self::Item> {
        loop {
            match self.chain.next() {
                Some(v) => {
                    if v.is_write_only() == self.writable {
                        return Some(v);
                    }
                }
                None => return None,
            }
        }
    }
}

// We can't derive Debug, because rustc doesn't generate the `M::T: Debug` constraint
impl<M> Debug for DescriptorChainRwIter<M>
where
    M: Debug,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("DescriptorChainRwIter")
            .field("chain", &self.chain)
            .field("writable", &self.writable)
            .finish()
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::mock::{DescriptorTable, MockSplitQueue};
    use virtio_bindings::bindings::virtio_ring::{VRING_DESC_F_INDIRECT, VRING_DESC_F_NEXT};
    use vm_memory::GuestMemoryMmap;

    #[test]
    fn test_checked_new_descriptor_chain() {
        let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
        let vq = MockSplitQueue::new(m, 16);

        assert!(vq.end().0 < 0x1000);

        // index >= queue_size
        assert!(
            DescriptorChain::<&GuestMemoryMmap>::new(m, vq.start(), 16, 16)
                .next()
                .is_none()
        );

        // desc_table address is way off
        assert!(
            DescriptorChain::<&GuestMemoryMmap>::new(m, GuestAddress(0x00ff_ffff_ffff), 16, 0)
                .next()
                .is_none()
        );

        {
            // the first desc has a normal len, and the next_descriptor flag is set,
            // but the index of the next descriptor is too large
            let desc = Descriptor::new(0x1000, 0x1000, VRING_DESC_F_NEXT as u16, 16);
            vq.desc_table().store(0, desc).unwrap();
            let mut c = DescriptorChain::<&GuestMemoryMmap>::new(m, vq.start(), 16, 0);

            c.next().unwrap();
            assert!(c.next().is_none());
        }

        // finally, let's test an ok chain
        {
            let desc = Descriptor::new(0x1000, 0x1000, VRING_DESC_F_NEXT as u16, 1);
            vq.desc_table().store(0, desc).unwrap();

            let desc = Descriptor::new(0x2000, 0x1000, 0, 0);
            vq.desc_table().store(1, desc).unwrap();

            let mut c = DescriptorChain::<&GuestMemoryMmap>::new(m, vq.start(), 16, 0);

            assert_eq!(
                c.memory() as *const GuestMemoryMmap,
                m as *const GuestMemoryMmap
            );

            assert_eq!(c.desc_table, vq.start());
            assert_eq!(c.queue_size, 16);
            assert_eq!(c.ttl, c.queue_size);

            let desc = c.next().unwrap();
            assert_eq!(desc.addr(), GuestAddress(0x1000));
            assert_eq!(desc.len(), 0x1000);
            assert_eq!(desc.flags(), VRING_DESC_F_NEXT as u16);
            assert_eq!(desc.next(), 1);
            assert_eq!(c.ttl, c.queue_size - 1);

            assert!(c.next().is_some());
            // The descriptor above was the last from the chain, so `ttl` should be 0 now.
            assert_eq!(c.ttl, 0);
            assert!(c.next().is_none());
            assert_eq!(c.ttl, 0);
        }
    }

    #[test]
    fn test_ttl_wrap_around() {
        const QUEUE_SIZE: u16 = 16;

        let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x100000)]).unwrap();
        let vq = MockSplitQueue::new(m, QUEUE_SIZE);

        // Populate the entire descriptor table with entries. Only the last one should not have
        // VIRTQ_DESC_F_NEXT set.
        for i in 0..QUEUE_SIZE - 1 {
            let desc = Descriptor::new(
                0x1000 * (i + 1) as u64,
                0x1000,
                VRING_DESC_F_NEXT as u16,
                i + 1,
            );
            vq.desc_table().store(i, desc).unwrap();
        }
        let desc = Descriptor::new((0x1000 * 16) as u64, 0x1000, 0, 0);
        vq.desc_table().store(QUEUE_SIZE - 1, desc).unwrap();

        let mut c = DescriptorChain::<&GuestMemoryMmap>::new(m, vq.start(), QUEUE_SIZE, 0);
        assert_eq!(c.ttl, c.queue_size);

        // Validate that `ttl` wraps around even when the entire descriptor table is populated.
        for i in 0..QUEUE_SIZE {
            let _desc = c.next().unwrap();
            assert_eq!(c.ttl, c.queue_size - i - 1);
        }
        assert!(c.next().is_none());
    }

    #[test]
    fn test_new_from_indirect_descriptor() {
        // This is testing that chaining an indirect table works as expected. It is also a
        // negative test for the following requirement from the spec:
        // `A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and VIRTQ_DESC_F_NEXT in flags.`. In
        // case the driver is setting both of these flags, we check that the device doesn't panic.
        let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
        let vq = MockSplitQueue::new(m, 16);
        let dtable = vq.desc_table();

        // Create a chain with one normal descriptor and one pointing to an indirect table.
        let desc = Descriptor::new(0x6000, 0x1000, VRING_DESC_F_NEXT as u16, 1);
        dtable.store(0, desc).unwrap();
        // The spec forbids setting both VIRTQ_DESC_F_INDIRECT and VIRTQ_DESC_F_NEXT in flags. We
        // do not currently enforce this rule, we just ignore the VIRTQ_DESC_F_NEXT flag.
        let desc = Descriptor::new(
            0x7000,
            0x1000,
            (VRING_DESC_F_INDIRECT | VRING_DESC_F_NEXT) as u16,
            2,
        );
        dtable.store(1, desc).unwrap();
        let desc = Descriptor::new(0x8000, 0x1000, 0, 0);
        dtable.store(2, desc).unwrap();

        let mut c: DescriptorChain<&GuestMemoryMmap> = DescriptorChain::new(m, vq.start(), 16, 0);

        // create an indirect table with 4 chained descriptors
        let idtable = DescriptorTable::new(m, GuestAddress(0x7000), 4);
        for i in 0..4u16 {
            let desc: Descriptor = if i < 3 {
                Descriptor::new(0x1000 * i as u64, 0x1000, VRING_DESC_F_NEXT as u16, i + 1)
            } else {
                Descriptor::new(0x1000 * i as u64, 0x1000, 0, 0)
            };
            idtable.store(i, desc).unwrap();
        }

        assert_eq!(c.head_index(), 0);
        // Consume the first descriptor.
        c.next().unwrap();

        // The chain logic hasn't parsed the indirect descriptor yet.
        assert!(!c.is_indirect);

        // Try to iterate through the indirect descriptor chain.
        for i in 0..4 {
            let desc = c.next().unwrap();
            assert!(c.is_indirect);
            if i < 3 {
                assert_eq!(desc.flags(), VRING_DESC_F_NEXT as u16);
                assert_eq!(desc.next(), i + 1);
            }
        }

        // Even though we added a new descriptor after the one that is pointing to the indirect
        // table, this descriptor won't be available when parsing the chain.
        assert!(c.next().is_none());
    }

    #[test]
    fn test_indirect_descriptor_address_noaligned() {
        // Alignment requirements for vring elements start from virtio 1.0,
        // but this is not necessary for the address of an indirect descriptor.
        let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
        let vq = MockSplitQueue::new(m, 16);
        let dtable = vq.desc_table();

        // Create a chain with a descriptor pointing to an indirect table with unaligned address.
        let desc = Descriptor::new(
            0x7001,
            0x1000,
            (VRING_DESC_F_INDIRECT | VRING_DESC_F_NEXT) as u16,
            2,
        );
        dtable.store(0, desc).unwrap();

        let mut c: DescriptorChain<&GuestMemoryMmap> = DescriptorChain::new(m, vq.start(), 16, 0);

        // Create an indirect table with 4 chained descriptors.
        let idtable = DescriptorTable::new(m, GuestAddress(0x7001), 4);
        for i in 0..4u16 {
            let desc: Descriptor = if i < 3 {
                Descriptor::new(0x1000 * i as u64, 0x1000, VRING_DESC_F_NEXT as u16, i + 1)
            } else {
                Descriptor::new(0x1000 * i as u64, 0x1000, 0, 0)
            };
            idtable.store(i, desc).unwrap();
        }

        // Try to iterate through the indirect descriptor chain.
        for i in 0..4 {
            let desc = c.next().unwrap();
            assert!(c.is_indirect);
            if i < 3 {
                assert_eq!(desc.flags(), VRING_DESC_F_NEXT as u16);
                assert_eq!(desc.next(), i + 1);
            }
        }
    }

    #[test]
    fn test_indirect_descriptor_err() {
        // We are testing here different misconfigurations of the indirect table. For these error
        // case scenarios, the iterator over the descriptor chain won't return a new descriptor.
        {
            let m = &GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
            let vq = MockSplitQueue::new(m, 16);

            // Create a chain with a descriptor pointing to an invalid indirect table: len not a
            // multiple of descriptor size.
            let desc = Descriptor::new(0x1000, 0x1001, VRING_DESC_F_INDIRECT as u16, 0);
            vq.desc_table().store(0, desc).unwrap();

            let mut c: DescriptorChain<&GuestMemoryMmap> =
                DescriptorChain::new(m, vq.start(), 16, 0);

            assert!(c.next().is_none());
        }

        {
            let m = &GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
            let vq = MockSplitQueue::new(m, 16);

            // Create a chain with a descriptor pointing to an invalid indirect table: table len >
            // u16::MAX.
            let desc = Descriptor::new(
                0x1000,
                (u16::MAX as u32 + 1) * VRING_DESC_ALIGN_SIZE,
                VRING_DESC_F_INDIRECT as u16,
                0,
            );
            vq.desc_table().store(0, desc).unwrap();

            let mut c: DescriptorChain<&GuestMemoryMmap> =
                DescriptorChain::new(m, vq.start(), 16, 0);

            assert!(c.next().is_none());
        }

        {
            let m = &GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
            let vq = MockSplitQueue::new(m, 16);

            // Create a chain with a descriptor pointing to an indirect table.
            let desc = Descriptor::new(0x1000, 0x1000, VRING_DESC_F_INDIRECT as u16, 0);
            vq.desc_table().store(0, desc).unwrap();

            // It's ok for an indirect descriptor to have flags = 0.
            let desc = Descriptor::new(0x3000, 0x1000, 0, 0);
            m.write_obj(desc, GuestAddress(0x1000)).unwrap();

            let mut c: DescriptorChain<&GuestMemoryMmap> =
                DescriptorChain::new(m, vq.start(), 16, 0);
            assert!(c.next().is_some());

            // But it's not allowed to have an indirect descriptor that points to another indirect
            // table.
            let desc = Descriptor::new(0x3000, 0x1000, VRING_DESC_F_INDIRECT as u16, 0);
            m.write_obj(desc, GuestAddress(0x1000)).unwrap();

            let mut c: DescriptorChain<&GuestMemoryMmap> =
                DescriptorChain::new(m, vq.start(), 16, 0);

            assert!(c.next().is_none());
        }
    }
}
virtio-queue-0.11.0/src/defs.rs000064400000000000000000000027231046102023000144230ustar 00000000000000// Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// SPDX-License-Identifier: Apache-2.0 OR BSD-3-Clause

//! Virtio queue related constant definitions

/// Size of used ring header: flags (u16) + idx (u16)
pub(crate) const VIRTQ_USED_RING_HEADER_SIZE: u64 = 4;

/// Size of the used ring metadata: header + avail_event (le16).
///
/// The total size of the used ring is:
/// VIRTQ_USED_RING_META_SIZE + VIRTQ_USED_ELEMENT_SIZE * queue_size.
pub(crate) const VIRTQ_USED_RING_META_SIZE: u64 = VIRTQ_USED_RING_HEADER_SIZE + 2;

/// Size of one element in the used ring, id (le32) + len (le32).
pub(crate) const VIRTQ_USED_ELEMENT_SIZE: u64 = 8;

/// Size of available ring header: flags(u16) + idx(u16)
pub(crate) const VIRTQ_AVAIL_RING_HEADER_SIZE: u64 = 4;

/// Size of the available ring metadata: header + used_event (le16).
///
/// The total size of the available ring is:
/// VIRTQ_AVAIL_RING_META_SIZE + VIRTQ_AVAIL_ELEMENT_SIZE * queue_size.
pub(crate) const VIRTQ_AVAIL_RING_META_SIZE: u64 = VIRTQ_AVAIL_RING_HEADER_SIZE + 2;

/// Size of one element in the available ring (le16).
pub(crate) const VIRTQ_AVAIL_ELEMENT_SIZE: u64 = 2;

/// Default guest physical address for descriptor table.
pub(crate) const DEFAULT_DESC_TABLE_ADDR: u64 = 0x0;

/// Default guest physical address for available ring.
pub(crate) const DEFAULT_AVAIL_RING_ADDR: u64 = 0x0;

/// Default guest physical address for used ring.
pub(crate) const DEFAULT_USED_RING_ADDR: u64 = 0x0;
virtio-queue-0.11.0/src/descriptor.rs000064400000000000000000000216111046102023000156550ustar 00000000000000// Portions Copyright 2017 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-3-Clause file.
//
// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Copyright © 2019 Intel Corporation
//
// Copyright (C) 2020-2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0 AND BSD-3-Clause

use vm_memory::{ByteValued, GuestAddress, Le16, Le32, Le64};

use virtio_bindings::bindings::virtio_ring::{
    VRING_DESC_F_INDIRECT, VRING_DESC_F_NEXT, VRING_DESC_F_WRITE,
};

/// A virtio descriptor with C representation constraints.
///
/// # Example
///
/// ```rust
/// # use virtio_bindings::bindings::virtio_ring::{VRING_DESC_F_NEXT, VRING_DESC_F_WRITE};
/// # use virtio_queue::mock::MockSplitQueue;
/// use virtio_queue::{Descriptor, Queue, QueueOwnedT};
/// use vm_memory::{GuestAddress, GuestMemoryMmap};
///
/// # fn populate_queue(m: &GuestMemoryMmap) -> Queue {
/// #     let vq = MockSplitQueue::new(m, 16);
/// #     let mut q = vq.create_queue().unwrap();
/// #
/// #     // We have only one chain: (0, 1).
/// #     let desc = Descriptor::new(0x1000, 0x1000, VRING_DESC_F_NEXT as u16, 1);
/// #     vq.desc_table().store(0, desc);
/// #     let desc = Descriptor::new(0x2000, 0x1000, VRING_DESC_F_WRITE as u16, 0);
/// #     vq.desc_table().store(1, desc);
/// #
/// #     vq.avail().ring().ref_at(0).unwrap().store(u16::to_le(0));
/// #     vq.avail().idx().store(u16::to_le(1));
/// #     q
/// # }
/// let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap();
/// // Populate the queue with descriptor chains and update the available ring accordingly.
/// let mut queue = populate_queue(m);
/// let mut i = queue.iter(m).unwrap();
/// let mut c = i.next().unwrap();
///
/// // Get the first descriptor and access its fields.
/// let desc = c.next().unwrap();
/// let _addr = desc.addr();
/// let _len = desc.len();
/// let _flags = desc.flags();
/// let _next = desc.next();
/// let _is_write_only = desc.is_write_only();
/// let _has_next = desc.has_next();
/// let _refers_to_ind_table = desc.refers_to_indirect_table();
/// ```
// Note that the `ByteValued` implementation of this structure expects the `Descriptor` to store
// only plain old data types.
#[repr(C)]
#[derive(Default, Clone, Copy, Debug)]
pub struct Descriptor {
    /// Guest physical address of device specific data.
    addr: Le64,

    /// Length of device specific data.
    len: Le32,

    /// Includes next, write, and indirect bits.
    flags: Le16,

    /// Index into the descriptor table of the next descriptor if flags has the `next` bit set.
    next: Le16,
}

#[allow(clippy::len_without_is_empty)]
impl Descriptor {
    /// Return the guest physical address of the descriptor buffer.
    pub fn addr(&self) -> GuestAddress {
        GuestAddress(self.addr.into())
    }

    /// Return the length of the descriptor buffer.
    pub fn len(&self) -> u32 {
        self.len.into()
    }

    /// Return the flags for this descriptor, including next, write and indirect bits.
    pub fn flags(&self) -> u16 {
        self.flags.into()
    }

    /// Return the value stored in the `next` field of the descriptor.
    pub fn next(&self) -> u16 {
        self.next.into()
    }

    /// Check whether this descriptor refers to a buffer containing an indirect descriptor table.
    pub fn refers_to_indirect_table(&self) -> bool {
        self.flags() & VRING_DESC_F_INDIRECT as u16 != 0
    }

    /// Check whether the `VIRTQ_DESC_F_NEXT` is set for the descriptor.
    pub fn has_next(&self) -> bool {
        self.flags() & VRING_DESC_F_NEXT as u16 != 0
    }

    /// Check if the driver designated this as a write only descriptor.
    ///
    /// If this is false, this descriptor is read only.
    /// Write only means the emulated device can write and the driver can read.
    pub fn is_write_only(&self) -> bool {
        self.flags() & VRING_DESC_F_WRITE as u16 != 0
    }
}

#[cfg(any(test, feature = "test-utils"))]
impl Descriptor {
    /// Create a new descriptor.
    ///
    /// # Arguments
    /// * `addr` - the guest physical address of the descriptor buffer.
    /// * `len` - the length of the descriptor buffer.
    /// * `flags` - the `flags` for the descriptor.
    /// * `next` - the `next` field of the descriptor.
    pub fn new(addr: u64, len: u32, flags: u16, next: u16) -> Self {
        Descriptor {
            addr: addr.into(),
            len: len.into(),
            flags: flags.into(),
            next: next.into(),
        }
    }

    /// Set the guest physical address of the descriptor buffer.
    pub fn set_addr(&mut self, addr: u64) {
        self.addr = addr.into();
    }

    /// Set the length of the descriptor buffer.
    pub fn set_len(&mut self, len: u32) {
        self.len = len.into();
    }

    /// Set the flags for this descriptor.
    pub fn set_flags(&mut self, flags: u16) {
        self.flags = flags.into();
    }

    /// Set the value stored in the `next` field of the descriptor.
    pub fn set_next(&mut self, next: u16) {
        self.next = next.into();
    }
}

// SAFETY: This is safe because `Descriptor` contains only wrappers over POD types and
// all accesses through safe `vm-memory` API will validate any garbage that could be
// included in there.
unsafe impl ByteValued for Descriptor {}

/// Represents the contents of an element from the used virtqueue ring.
// Note that the `ByteValued` implementation of this structure expects the `VirtqUsedElem` to store
// only plain old data types.
#[repr(C)]
#[derive(Clone, Copy, Default, Debug)]
pub struct VirtqUsedElem {
    id: Le32,
    len: Le32,
}

impl VirtqUsedElem {
    /// Create a new `VirtqUsedElem` instance.
    ///
    /// # Arguments
    /// * `id` - the index of the used descriptor chain.
    /// * `len` - the total length of the descriptor chain which was used (written to).
    pub(crate) fn new(id: u32, len: u32) -> Self {
        VirtqUsedElem {
            id: id.into(),
            len: len.into(),
        }
    }
}

#[cfg(any(test, feature = "test-utils"))]
#[allow(clippy::len_without_is_empty)]
impl VirtqUsedElem {
    /// Get the index of the used descriptor chain.
    pub fn id(&self) -> u32 {
        self.id.into()
    }

    /// Get `length` field of the used ring entry.
    pub fn len(&self) -> u32 {
        self.len.into()
    }
}

// SAFETY: This is safe because `VirtqUsedElem` contains only wrappers over POD types
// and all accesses through safe `vm-memory` API will validate any garbage that could be
// included in there.
unsafe impl ByteValued for VirtqUsedElem {}

#[cfg(test)]
mod tests {
    use super::*;
    use memoffset::offset_of;
    use std::mem::{align_of, size_of};

    #[test]
    fn test_descriptor_offset() {
        assert_eq!(size_of::<Descriptor>(), 16);
        assert_eq!(offset_of!(Descriptor, addr), 0);
        assert_eq!(offset_of!(Descriptor, len), 8);
        assert_eq!(offset_of!(Descriptor, flags), 12);
        assert_eq!(offset_of!(Descriptor, next), 14);
        assert!(align_of::<Descriptor>() <= 16);
    }

    #[test]
    fn test_descriptor_getter_setter() {
        let mut desc = Descriptor::new(0, 0, 0, 0);

        desc.set_addr(0x1000);
        assert_eq!(desc.addr(), GuestAddress(0x1000));
        desc.set_len(0x2000);
        assert_eq!(desc.len(), 0x2000);
        desc.set_flags(VRING_DESC_F_NEXT as u16);
        assert_eq!(desc.flags(), VRING_DESC_F_NEXT as u16);
        assert!(desc.has_next());
        assert!(!desc.is_write_only());
        assert!(!desc.refers_to_indirect_table());
        desc.set_flags(VRING_DESC_F_WRITE as u16);
        assert_eq!(desc.flags(), VRING_DESC_F_WRITE as u16);
        assert!(!desc.has_next());
        assert!(desc.is_write_only());
        assert!(!desc.refers_to_indirect_table());
        desc.set_flags(VRING_DESC_F_INDIRECT as u16);
        assert_eq!(desc.flags(), VRING_DESC_F_INDIRECT as u16);
        assert!(!desc.has_next());
        assert!(!desc.is_write_only());
        assert!(desc.refers_to_indirect_table());
        desc.set_next(3);
        assert_eq!(desc.next(), 3);
    }

    #[test]
    fn test_descriptor_copy() {
        let e1 = Descriptor::new(1, 2, VRING_DESC_F_NEXT as u16, 3);
        let mut e2 = Descriptor::default();

        e2.as_mut_slice().copy_from_slice(e1.as_slice());
        assert_eq!(e1.addr(), e2.addr());
        assert_eq!(e1.len(), e2.len());
        assert_eq!(e1.flags(), e2.flags());
        assert_eq!(e1.next(), e2.next());
    }

    #[test]
    fn test_used_elem_offset() {
        assert_eq!(offset_of!(VirtqUsedElem, id), 0);
        assert_eq!(offset_of!(VirtqUsedElem, len), 4);
        assert_eq!(size_of::<VirtqUsedElem>(), 8);
    }

    #[test]
    fn test_used_elem_copy() {
        let e1 = VirtqUsedElem::new(3, 15);
        let mut e2 = VirtqUsedElem::new(0, 0);

        e2.as_mut_slice().copy_from_slice(e1.as_slice());
        assert_eq!(e1.id, e2.id);
        assert_eq!(e1.len, e2.len);
    }
}
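The layout and flag tests above rely on the crate's `Descriptor` type. A minimal, dependency-free sketch of the same idea — the 16-byte split-virtqueue descriptor layout and its flag checks — can be written with plain integer fields (the `RawDescriptor` type and flag constants here are illustrative, not part of the crate's API; the flag values match the virtio spec):

```rust
// Standalone sketch of the 16-byte split-virtqueue descriptor layout.
// `RawDescriptor` is a hypothetical stand-in for the crate's `Descriptor`,
// which uses Le64/Le32/Le16 wrappers instead of native integers.
const F_NEXT: u16 = 0x1; // VRING_DESC_F_NEXT
const F_WRITE: u16 = 0x2; // VRING_DESC_F_WRITE
const F_INDIRECT: u16 = 0x4; // VRING_DESC_F_INDIRECT

#[repr(C)]
#[derive(Clone, Copy, Default, Debug)]
pub struct RawDescriptor {
    pub addr: u64,  // guest physical address of the buffer
    pub len: u32,   // buffer length
    pub flags: u16, // NEXT | WRITE | INDIRECT
    pub next: u16,  // index of the next descriptor, meaningful when NEXT is set
}

pub fn has_next(d: &RawDescriptor) -> bool {
    d.flags & F_NEXT != 0
}

pub fn is_write_only(d: &RawDescriptor) -> bool {
    d.flags & F_WRITE != 0
}

pub fn refers_to_indirect_table(d: &RawDescriptor) -> bool {
    d.flags & F_INDIRECT != 0
}

fn main() {
    // The virtio spec fixes the descriptor size at 16 bytes; repr(C) with
    // these field types produces no padding.
    assert_eq!(std::mem::size_of::<RawDescriptor>(), 16);

    let d = RawDescriptor { addr: 0x1000, len: 0x1000, flags: F_NEXT, next: 1 };
    assert!(has_next(&d));
    assert!(!is_write_only(&d));
    assert!(!refers_to_indirect_table(&d));
}
```

The same bit tests are what `has_next`, `is_write_only` and `refers_to_indirect_table` perform on the `Le16` flags field in the crate proper.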
virtio-queue-0.11.0/src/lib.rs000064400000000000000000000232101046102023000142420ustar 00000000000000// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Portions Copyright 2017 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-3-Clause file.
//
// Copyright © 2019 Intel Corporation
//
// Copyright (C) 2020-2021 Alibaba Cloud. All rights reserved.
//
// SPDX-License-Identifier: Apache-2.0 AND BSD-3-Clause

//! Virtio queue API for backend device drivers to access virtio queues.

#![deny(missing_docs)]

use std::fmt::{self, Debug, Display};
use std::num::Wrapping;
use std::ops::{Deref, DerefMut};
use std::sync::atomic::Ordering;

use log::error;
use vm_memory::{GuestMemory, GuestMemoryError};

pub use self::chain::{DescriptorChain, DescriptorChainRwIter};
pub use self::descriptor::{Descriptor, VirtqUsedElem};
pub use self::queue::{AvailIter, Queue};
pub use self::queue_sync::QueueSync;
pub use self::state::QueueState;

pub mod defs;
#[cfg(any(test, feature = "test-utils"))]
pub mod mock;

mod chain;
mod descriptor;
mod queue;
mod queue_sync;
mod state;

/// Virtio Queue related errors.
#[derive(Debug)]
pub enum Error {
    /// Address overflow.
    AddressOverflow,
    /// Failed to access guest memory.
    GuestMemory(GuestMemoryError),
    /// Invalid indirect descriptor.
    InvalidIndirectDescriptor,
    /// Invalid indirect descriptor table.
    InvalidIndirectDescriptorTable,
    /// Invalid descriptor chain.
    InvalidChain,
    /// Invalid descriptor index.
    InvalidDescriptorIndex,
    /// Invalid max_size.
    InvalidMaxSize,
    /// Invalid Queue Size.
    InvalidSize,
    /// Invalid alignment of descriptor table address.
    InvalidDescTableAlign,
    /// Invalid alignment of available ring address.
    InvalidAvailRingAlign,
    /// Invalid alignment of used ring address.
    InvalidUsedRingAlign,
    /// Invalid available ring index.
    InvalidAvailRingIndex,
    /// The queue is not ready for operation.
    QueueNotReady,
}

impl Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        use self::Error::*;

        match self {
            AddressOverflow => write!(f, "address overflow"),
            GuestMemory(_) => write!(f, "error accessing guest memory"),
            InvalidChain => write!(f, "invalid descriptor chain"),
            InvalidIndirectDescriptor => write!(f, "invalid indirect descriptor"),
            InvalidIndirectDescriptorTable => write!(f, "invalid indirect descriptor table"),
            InvalidDescriptorIndex => write!(f, "invalid descriptor index"),
            InvalidMaxSize => write!(f, "invalid queue maximum size"),
            InvalidSize => write!(f, "invalid queue size"),
            InvalidDescTableAlign => write!(
                f,
                "virtio queue descriptor table breaks alignment constraints"
            ),
            InvalidAvailRingAlign => write!(
                f,
                "virtio queue available ring breaks alignment constraints"
            ),
            InvalidUsedRingAlign => {
                write!(f, "virtio queue used ring breaks alignment constraints")
            }
            InvalidAvailRingIndex => write!(
                f,
                "invalid available ring index (more descriptors to process than queue size)"
            ),
            QueueNotReady => write!(f, "trying to process requests on a queue that's not ready"),
        }
    }
}

impl std::error::Error for Error {}

/// Trait for objects returned by `QueueT::lock()`.
pub trait QueueGuard<'a> {
    /// Type for guard returned by `Self::lock()`.
    type G: DerefMut<Target = Queue>;
}

/// Trait to access and manipulate a virtio queue.
///
/// To optimize for performance, different implementations of the `QueueT` trait may be
/// provided for single-threaded context and multi-threaded context.
///
/// Using Higher-Rank Trait Bounds (HRTBs) to effectively define an associated type that has a
/// lifetime parameter, without tagging the `QueueT` trait with a lifetime as well.
pub trait QueueT: for<'a> QueueGuard<'a> {
    /// Construct an empty virtio queue state object with the given `max_size`.
    ///
    /// Returns an error if `max_size` is invalid.
    fn new(max_size: u16) -> Result<Self, Error>
    where
        Self: Sized;

    /// Check whether the queue configuration is valid.
    fn is_valid<M: GuestMemory>(&self, mem: &M) -> bool;

    /// Reset the queue to the initial state.
    fn reset(&mut self);

    /// Get an exclusive reference to the underlying `Queue` object.
    ///
    /// Logically this method will acquire the underlying lock protecting the `Queue` Object.
    /// The lock will be released when the returned object gets dropped.
    fn lock(&mut self) -> <Self as QueueGuard>::G;

    /// Get the maximum size of the virtio queue.
    fn max_size(&self) -> u16;

    /// Get the actual size configured by the guest.
    fn size(&self) -> u16;

    /// Configure the queue size for the virtio queue.
    fn set_size(&mut self, size: u16);

    /// Check whether the queue is ready to be processed.
    fn ready(&self) -> bool;

    /// Configure the queue to `ready for processing` state.
    fn set_ready(&mut self, ready: bool);

    /// Set the descriptor table address for the queue.
    ///
    /// The descriptor table address is 64-bit; the corresponding part will be updated if `low`
    /// and/or `high` is `Some` and valid.
    fn set_desc_table_address(&mut self, low: Option<u32>, high: Option<u32>);

    /// Set the available ring address for the queue.
    ///
    /// The available ring address is 64-bit; the corresponding part will be updated if `low`
    /// and/or `high` is `Some` and valid.
    fn set_avail_ring_address(&mut self, low: Option<u32>, high: Option<u32>);

    /// Set the used ring address for the queue.
    ///
    /// The used ring address is 64-bit; the corresponding part will be updated if `low`
    /// and/or `high` is `Some` and valid.
    fn set_used_ring_address(&mut self, low: Option<u32>, high: Option<u32>);

    /// Enable/disable the VIRTIO_F_RING_EVENT_IDX feature for interrupt coalescing.
    fn set_event_idx(&mut self, enabled: bool);

    /// Read the `idx` field from the available ring.
    ///
    /// # Panics
    ///
    /// Panics if order is Release or AcqRel.
    fn avail_idx<M>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error>
    where
        M: GuestMemory + ?Sized;

    /// Read the `idx` field from the used ring.
    ///
    /// # Panics
    ///
    /// Panics if order is Release or AcqRel.
    fn used_idx<M: GuestMemory>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error>;

    /// Put a used descriptor head into the used ring.
    fn add_used<M: GuestMemory>(&mut self, mem: &M, head_index: u16, len: u32)
        -> Result<(), Error>;

    /// Enable notification events from the guest driver.
    ///
    /// Return true if one or more descriptors can be consumed from the available ring after
    /// notifications were enabled (and thus it's possible there will be no corresponding
    /// notification).
    fn enable_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<bool, Error>;

    /// Disable notification events from the guest driver.
    fn disable_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<(), Error>;

    /// Check whether a notification to the guest is needed.
    ///
    /// Please note this method has side effects: once it returns `true`, it considers the
    /// driver will actually be notified, remembers the associated index in the used ring, and
    /// won't return `true` again until the driver updates `used_event` and/or the notification
    /// conditions hold once more.
    fn needs_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<bool, Error>;

    /// Return the index of the next entry in the available ring.
    fn next_avail(&self) -> u16;

    /// Set the index of the next entry in the available ring.
    fn set_next_avail(&mut self, next_avail: u16);

    /// Return the index for the next descriptor in the used ring.
    fn next_used(&self) -> u16;

    /// Set the index for the next descriptor in the used ring.
    fn set_next_used(&mut self, next_used: u16);

    /// Return the address of the descriptor table.
    fn desc_table(&self) -> u64;

    /// Return the address of the available ring.
    fn avail_ring(&self) -> u64;

    /// Return the address of the used ring.
    fn used_ring(&self) -> u64;

    /// Checks whether `VIRTIO_F_RING_EVENT_IDX` is negotiated.
    ///
    /// This getter is only returning the correct value after the device passes the `FEATURES_OK`
    /// status.
    fn event_idx_enabled(&self) -> bool;

    /// Pop and return the next available descriptor chain, or `None` when there are no more
    /// descriptor chains available.
    ///
    /// This enables the consumption of available descriptor chains in a "one at a time"
    /// manner, without having to hold a borrow after the method returns.
    fn pop_descriptor_chain<M>(&mut self, mem: M) -> Option<DescriptorChain<M>>
    where
        M: Clone + Deref,
        M::Target: GuestMemory;
}

/// Trait to access and manipulate a Virtio queue that's known to be exclusively accessed
/// by a single execution thread.
pub trait QueueOwnedT: QueueT {
    /// Get a consuming iterator over all available descriptor chain heads offered by the driver.
    ///
    /// # Arguments
    /// * `mem` - the `GuestMemory` object that can be used to access the queue buffers.
    fn iter<M>(&mut self, mem: M) -> Result<AvailIter<'_, M>, Error>
    where
        M: Deref,
        M::Target: GuestMemory;

    /// Undo the last advancement of the next available index field by decrementing its
    /// value by one.
    fn go_to_previous_position(&mut self);
}
virtio-queue-0.11.0/src/mock.rs000064400000000000000000000422051046102023000144320ustar 00000000000000// Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 OR BSD-3-Clause

//! Utilities used by unit tests and benchmarks for mocking the driver side
//! of the virtio protocol.

use std::marker::PhantomData;
use std::mem::size_of;

use vm_memory::{
    Address, ByteValued, Bytes, GuestAddress, GuestMemory, GuestMemoryError, GuestUsize,
};

use crate::defs::{VIRTQ_AVAIL_ELEMENT_SIZE, VIRTQ_AVAIL_RING_HEADER_SIZE};
use crate::{Descriptor, DescriptorChain, Error, Queue, QueueOwnedT, QueueT, VirtqUsedElem};
use std::fmt::{self, Debug, Display};
use virtio_bindings::bindings::virtio_ring::{VRING_DESC_F_INDIRECT, VRING_DESC_F_NEXT};

/// Mock related errors.
#[derive(Debug)]
pub enum MockError {
    /// Cannot create the Queue object due to invalid parameters.
    InvalidQueueParams(Error),
    /// Invalid Ref index
    InvalidIndex,
    /// Invalid next avail
    InvalidNextAvail,
    /// Guest memory errors
    GuestMem(GuestMemoryError),
}

impl Display for MockError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        use self::MockError::*;

        match self {
            InvalidQueueParams(_) => write!(f, "cannot create queue due to invalid parameter"),
            InvalidIndex => write!(
                f,
                "invalid index for pointing to an address in a region when defining a Ref object"
            ),
            InvalidNextAvail => write!(
                f,
                "invalid next available descriptor chain head in the queue"
            ),
            GuestMem(e) => write!(f, "guest memory error: {}", e),
        }
    }
}

impl std::error::Error for MockError {}

/// Wrapper struct used for accessing a particular address of a GuestMemory area.
pub struct Ref<'a, M, T> {
    mem: &'a M,
    addr: GuestAddress,
    phantom: PhantomData<*const T>,
}

impl<'a, M: GuestMemory, T: ByteValued> Ref<'a, M, T> {
    fn new(mem: &'a M, addr: GuestAddress) -> Self {
        Ref {
            mem,
            addr,
            phantom: PhantomData,
        }
    }

    /// Read an object of type T from the underlying memory found at self.addr.
    pub fn load(&self) -> T {
        self.mem.read_obj(self.addr).unwrap()
    }

    /// Write an object of type T to the underlying memory found at self.addr.
    pub fn store(&self, val: T) {
        self.mem.write_obj(val, self.addr).unwrap()
    }
}

/// Wrapper struct used for accessing a subregion of a GuestMemory area.
pub struct ArrayRef<'a, M, T> {
    mem: &'a M,
    addr: GuestAddress,
    len: usize,
    phantom: PhantomData<*const T>,
}

impl<'a, M: GuestMemory, T: ByteValued> ArrayRef<'a, M, T> {
    fn new(mem: &'a M, addr: GuestAddress, len: usize) -> Self {
        ArrayRef {
            mem,
            addr,
            len,
            phantom: PhantomData,
        }
    }

    /// Return a `Ref` object pointing to an address defined by a particular
    /// index offset in the region.
    pub fn ref_at(&self, index: usize) -> Result<Ref<'a, M, T>, MockError> {
        if index >= self.len {
            return Err(MockError::InvalidIndex);
        }

        let addr = self
            .addr
            .checked_add((index * size_of::<T>()) as u64)
            .unwrap();

        Ok(Ref::new(self.mem, addr))
    }
}

/// Represents a virtio queue ring. The only difference between the used and available rings
/// is the ring element type.
pub struct SplitQueueRing<'a, M, T: ByteValued> {
    flags: Ref<'a, M, u16>,
    // The value stored here should more precisely be a `Wrapping<u16>`, but that would require a
    // `ByteValued` impl for this type, which is not provided in vm-memory. Implementing the trait
    // here would require defining a wrapper for `Wrapping<u16>` and that would be too much for a
    // mock framework that is only used in tests.
    idx: Ref<'a, M, u16>,
    ring: ArrayRef<'a, M, T>,
    // `used_event` for `AvailRing`, `avail_event` for `UsedRing`.
    event: Ref<'a, M, u16>,
}

impl<'a, M: GuestMemory, T: ByteValued> SplitQueueRing<'a, M, T> {
    /// Create a new `SplitQueueRing` instance
    pub fn new(mem: &'a M, base: GuestAddress, len: u16) -> Self {
        let event_addr = base
            .checked_add(4)
            .and_then(|a| a.checked_add((size_of::<T>() * len as usize) as u64))
            .unwrap();
        let split_queue_ring = SplitQueueRing {
            flags: Ref::new(mem, base),
            idx: Ref::new(mem, base.checked_add(2).unwrap()),
            ring: ArrayRef::new(mem, base.checked_add(4).unwrap(), len as usize),
            event: Ref::new(mem, event_addr),
        };

        split_queue_ring.flags.store(0);
        split_queue_ring.idx.store(0);
        split_queue_ring.event.store(0);

        split_queue_ring
    }

    /// Return the starting address of the `SplitQueueRing`.
    pub fn start(&self) -> GuestAddress {
        self.ring.addr
    }

    /// Return the end address of the `SplitQueueRing`.
    pub fn end(&self) -> GuestAddress {
        self.start()
            .checked_add(self.ring.len as GuestUsize)
            .unwrap()
    }

    /// Return a reference to the idx field.
    pub fn idx(&self) -> &Ref<'a, M, u16> {
        &self.idx
    }

    /// Return a reference to the ring field.
    pub fn ring(&self) -> &ArrayRef<'a, M, T> {
        &self.ring
    }
}

/// The available ring is used by the driver to offer buffers to the device.
pub type AvailRing<'a, M> = SplitQueueRing<'a, M, u16>;
/// The used ring is where the device returns buffers once it is done with them.
pub type UsedRing<'a, M> = SplitQueueRing<'a, M, VirtqUsedElem>;

/// Refers to the buffers the driver is using for the device.
pub struct DescriptorTable<'a, M> {
    table: ArrayRef<'a, M, Descriptor>,
    len: u16,
    free_descriptors: Vec<u16>,
}

impl<'a, M: GuestMemory> DescriptorTable<'a, M> {
    /// Create a new `DescriptorTable` instance
    pub fn new(mem: &'a M, addr: GuestAddress, len: u16) -> Self {
        let table = ArrayRef::new(mem, addr, len as usize);
        let free_descriptors = (0..len).rev().collect();

        DescriptorTable {
            table,
            len,
            free_descriptors,
        }
    }

    /// Read one descriptor from the specified index.
    pub fn load(&self, index: u16) -> Result<Descriptor, MockError> {
        self.table
            .ref_at(index as usize)
            .map(|load_ref| load_ref.load())
    }

    /// Write one descriptor at the specified index.
    pub fn store(&self, index: u16, value: Descriptor) -> Result<(), MockError> {
        self.table
            .ref_at(index as usize)
            .map(|store_ref| store_ref.store(value))
    }

    /// Return the total size of the DescriptorTable in bytes.
    pub fn total_size(&self) -> u64 {
        (self.len as usize * size_of::<Descriptor>()) as u64
    }

    /// Create a chain of descriptors.
    pub fn build_chain(&mut self, len: u16) -> Result<u16, MockError> {
        let indices = self
            .free_descriptors
            .iter()
            .copied()
            .rev()
            .take(usize::from(len))
            .collect::<Vec<_>>();

        assert_eq!(indices.len(), len as usize);

        for (pos, index_value) in indices.iter().copied().enumerate() {
            // Addresses and lens constant for now.
            let mut desc = Descriptor::new(0x1000, 0x1000, 0, 0);

            // It's not the last descriptor in the chain.
            if pos < indices.len() - 1 {
                desc.set_flags(VRING_DESC_F_NEXT as u16);
                desc.set_next(indices[pos + 1]);
            } else {
                desc.set_flags(0);
            }

            self.store(index_value, desc)?;
        }

        Ok(indices[0])
    }
}

trait GuestAddressExt {
    fn align_up(&self, x: GuestUsize) -> GuestAddress;
}

impl GuestAddressExt for GuestAddress {
    fn align_up(&self, x: GuestUsize) -> GuestAddress {
        Self((self.0 + (x - 1)) & !(x - 1))
    }
}

/// A mock version of the virtio queue implemented from the perspective of the driver.
pub struct MockSplitQueue<'a, M> {
    mem: &'a M,
    len: u16,
    desc_table_addr: GuestAddress,
    desc_table: DescriptorTable<'a, M>,
    avail_addr: GuestAddress,
    avail: AvailRing<'a, M>,
    used_addr: GuestAddress,
    used: UsedRing<'a, M>,
    indirect_addr: GuestAddress,
}

impl<'a, M: GuestMemory> MockSplitQueue<'a, M> {
    /// Create a new `MockSplitQueue` instance with 0 as the default guest
    /// physical starting address.
    pub fn new(mem: &'a M, len: u16) -> Self {
        Self::create(mem, GuestAddress(0), len)
    }

    /// Create a new `MockSplitQueue` instance.
    pub fn create(mem: &'a M, start: GuestAddress, len: u16) -> Self {
        const AVAIL_ALIGN: GuestUsize = 2;
        const USED_ALIGN: GuestUsize = 4;

        let desc_table_addr = start;
        let desc_table = DescriptorTable::new(mem, desc_table_addr, len);

        let avail_addr = start
            .checked_add(16 * len as GuestUsize)
            .unwrap()
            .align_up(AVAIL_ALIGN);
        let avail = AvailRing::new(mem, avail_addr, len);

        let used_addr = avail.end().align_up(USED_ALIGN);
        let used = UsedRing::new(mem, used_addr, len);

        let indirect_addr = GuestAddress(0x3000_0000);

        MockSplitQueue {
            mem,
            len,
            desc_table_addr,
            desc_table,
            avail_addr,
            avail,
            used_addr,
            used,
            indirect_addr,
        }
    }

    /// Return the starting address of the queue.
    pub fn start(&self) -> GuestAddress {
        self.desc_table_addr
    }

    /// Return the end address of the queue.
    pub fn end(&self) -> GuestAddress {
        self.used.end()
    }

    /// Descriptor table accessor.
    pub fn desc_table(&self) -> &DescriptorTable<'a, M> {
        &self.desc_table
    }

    /// Available ring accessor.
pub fn avail(&self) -> &AvailRing<'a, M> { &self.avail } /// Used ring accessor. pub fn used(&self) -> &UsedRing<'a, M> { &self.used } /// Return the starting address of the descriptor table. pub fn desc_table_addr(&self) -> GuestAddress { self.desc_table_addr } /// Return the starting address of the available ring. pub fn avail_addr(&self) -> GuestAddress { self.avail_addr } /// Return the starting address of the used ring. pub fn used_addr(&self) -> GuestAddress { self.used_addr } fn update_avail_idx(&mut self, value: u16) -> Result<(), MockError> { let avail_idx = self.avail.idx.load(); self.avail.ring.ref_at(avail_idx as usize)?.store(value); self.avail.idx.store(avail_idx.wrapping_add(1)); Ok(()) } fn alloc_indirect_chain(&mut self, len: u16) -> Result<GuestAddress, MockError> { // To simplify things for now, we round up the table len as a multiple of 16. When this is // no longer the case, we should make sure the starting address of the descriptor table // we're creating below is properly aligned. let table_len = if len % 16 == 0 { len } else { 16 * (len / 16 + 1) }; let mut table = DescriptorTable::new(self.mem, self.indirect_addr, table_len); let head_descriptor_index = table.build_chain(len)?; // When building indirect descriptor tables, the descriptor at index 0 is supposed to be // first in the resulting chain. Just making sure our logic actually makes that happen. assert_eq!(head_descriptor_index, 0); let table_addr = self.indirect_addr; self.indirect_addr = self.indirect_addr.checked_add(table.total_size()).unwrap(); Ok(table_addr) } /// Add a descriptor chain to the table. pub fn add_chain(&mut self, len: u16) -> Result<(), MockError> { self.desc_table .build_chain(len) .and_then(|head_idx| self.update_avail_idx(head_idx)) } /// Add an indirect descriptor chain to the table. pub fn add_indirect_chain(&mut self, len: u16) -> Result<(), MockError> { let head_idx = self.desc_table.build_chain(1)?; // We just allocate the indirect table and forget about it for now.
let indirect_addr = self.alloc_indirect_chain(len)?; let mut desc = self.desc_table.load(head_idx)?; desc.set_flags(VRING_DESC_F_INDIRECT as u16); desc.set_addr(indirect_addr.raw_value()); desc.set_len(u32::from(len) * size_of::<Descriptor>() as u32); self.desc_table.store(head_idx, desc)?; self.update_avail_idx(head_idx) } /// Creates a new `Queue`, using the underlying memory regions represented /// by the `MockSplitQueue`. pub fn create_queue<Q: QueueT>(&self) -> Result<Q, Error> { let mut q = Q::new(self.len)?; q.set_size(self.len); q.set_ready(true); // we cannot directly set the u64 address, we need to compose it from low & high. q.set_desc_table_address( Some(self.desc_table_addr.0 as u32), Some((self.desc_table_addr.0 >> 32) as u32), ); q.set_avail_ring_address( Some(self.avail_addr.0 as u32), Some((self.avail_addr.0 >> 32) as u32), ); q.set_used_ring_address( Some(self.used_addr.0 as u32), Some((self.used_addr.0 >> 32) as u32), ); Ok(q) } /// Writes multiple descriptor chains to the memory object of the queue, at the beginning of /// the descriptor table, and returns the first `DescriptorChain` available. pub fn build_multiple_desc_chains( &self, descs: &[Descriptor], ) -> Result<DescriptorChain<&M>, MockError> { self.add_desc_chains(descs, 0)?; self.create_queue::<Queue>() .map_err(MockError::InvalidQueueParams)? .iter(self.mem) .map_err(MockError::InvalidQueueParams)? .next() .ok_or(MockError::InvalidNextAvail) } /// Writes a single descriptor chain to the memory object of the queue, at the beginning of the /// descriptor table, and returns the associated `DescriptorChain` object. // This method ensures the next flags and values are set properly for the desired chain, but // keeps the other characteristics of the input descriptors (`addr`, `len`, other flags). // TODO: make this function work with a generic queue. For now that's not possible because // we cannot create the descriptor chain from an iterator as iterator is not implemented for // a generic T, just for `Queue`.
pub fn build_desc_chain(&self, descs: &[Descriptor]) -> Result<DescriptorChain<&M>, MockError> { let mut modified_descs: Vec<Descriptor> = Vec::with_capacity(descs.len()); for (idx, desc) in descs.iter().enumerate() { let (flags, next) = if idx == descs.len() - 1 { // Clear the NEXT flag if it was set. The value of the next field of the // Descriptor doesn't matter at this point. (desc.flags() & !VRING_DESC_F_NEXT as u16, 0) } else { // Ensure that the next flag is set and that we are referring to the following // descriptor. This ignores any value actually present in `desc.next`. (desc.flags() | VRING_DESC_F_NEXT as u16, idx as u16 + 1) }; modified_descs.push(Descriptor::new(desc.addr().0, desc.len(), flags, next)); } self.build_multiple_desc_chains(&modified_descs[..]) } /// Adds descriptor chains to the memory object of the queue. // `descs` represents a slice of `Descriptor` objects which are used to populate the chains, and // `offset` is the index in the descriptor table where the chains should be added. // The descriptor chain related information is written in memory starting with address 0. // The `addr` fields of the input descriptors should start at a sufficiently // greater location (i.e. 1MiB, or `0x10_0000`). pub fn add_desc_chains(&self, descs: &[Descriptor], offset: u16) -> Result<(), MockError> { let mut new_entries = 0; let avail_idx: u16 = self .mem .read_obj::<u16>(self.avail_addr().unchecked_add(2)) .map(u16::from_le) .map_err(MockError::GuestMem)?; for (idx, desc) in descs.iter().enumerate() { let i = idx as u16 + offset; self.desc_table().store(i, *desc)?; if idx == 0 || descs[idx - 1].flags() & VRING_DESC_F_NEXT as u16 != 1 { // Update the available ring position. self.mem .write_obj( u16::to_le(i), self.avail_addr().unchecked_add( VIRTQ_AVAIL_RING_HEADER_SIZE + (avail_idx + new_entries) as u64 * VIRTQ_AVAIL_ELEMENT_SIZE, ), ) .map_err(MockError::GuestMem)?; new_entries += 1; } } // Increment `avail_idx`.
self.mem .write_obj( u16::to_le(avail_idx + new_entries), self.avail_addr().unchecked_add(2), ) .map_err(MockError::GuestMem)?; Ok(()) } }
virtio-queue-0.11.0/src/queue.rs
// Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved. // Copyright (C) 2020-2021 Alibaba Cloud. All rights reserved. // Copyright © 2019 Intel Corporation. // Portions Copyright 2017 The Chromium OS Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE-BSD-3-Clause file. // // SPDX-License-Identifier: Apache-2.0 AND BSD-3-Clause use std::mem::size_of; use std::num::Wrapping; use std::ops::Deref; use std::sync::atomic::{fence, Ordering}; use vm_memory::{Address, Bytes, GuestAddress, GuestMemory}; use crate::defs::{ DEFAULT_AVAIL_RING_ADDR, DEFAULT_DESC_TABLE_ADDR, DEFAULT_USED_RING_ADDR, VIRTQ_AVAIL_ELEMENT_SIZE, VIRTQ_AVAIL_RING_HEADER_SIZE, VIRTQ_AVAIL_RING_META_SIZE, VIRTQ_USED_ELEMENT_SIZE, VIRTQ_USED_RING_HEADER_SIZE, VIRTQ_USED_RING_META_SIZE, }; use crate::{ error, Descriptor, DescriptorChain, Error, QueueGuard, QueueOwnedT, QueueState, QueueT, VirtqUsedElem, }; use virtio_bindings::bindings::virtio_ring::VRING_USED_F_NO_NOTIFY; /// The maximum queue size as defined in the Virtio Spec. pub const MAX_QUEUE_SIZE: u16 = 32768; /// Struct to maintain information and manipulate a virtio queue. /// /// # Example /// /// ```rust /// use virtio_queue::{Queue, QueueOwnedT, QueueT}; /// use vm_memory::{Bytes, GuestAddress, GuestAddressSpace, GuestMemoryMmap}; /// /// let m = GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); /// let mut queue = Queue::new(1024).unwrap(); /// /// // First, the driver sets up the queue; this set up is done via writes on the bus (PCI, MMIO).
/// queue.set_size(8); /// queue.set_desc_table_address(Some(0x1000), None); /// queue.set_avail_ring_address(Some(0x2000), None); /// queue.set_used_ring_address(Some(0x3000), None); /// queue.set_event_idx(true); /// queue.set_ready(true); /// // The user should check if the queue is valid before starting to use it. /// assert!(queue.is_valid(&m)); /// /// // Here the driver would add entries in the available ring and then update the `idx` field of /// // the available ring (address = 0x2000 + 2). /// m.write_obj(3, GuestAddress(0x2002)); /// /// loop { ///     queue.disable_notification(&m).unwrap(); /// /// // Consume entries from the available ring. /// while let Some(chain) = queue.iter(&m).unwrap().next() { /// // Process the descriptor chain, and then add an entry in the used ring and optionally /// // notify the driver. /// queue.add_used(&m, chain.head_index(), 0x100).unwrap(); /// /// if queue.needs_notification(&m).unwrap() { /// // Here we would notify the driver it has new entries in the used ring to consume. /// } /// } /// if !queue.enable_notification(&m).unwrap() { /// break; /// } /// } /// /// // We can reset the queue at some point. /// queue.reset(); /// // The queue should not be ready after reset. /// assert!(!queue.ready()); /// ``` #[derive(Debug, Default, PartialEq, Eq)] pub struct Queue { /// The maximum size in elements offered by the device. max_size: u16, /// Tail position of the available ring. next_avail: Wrapping<u16>, /// Head position of the used ring. next_used: Wrapping<u16>, /// VIRTIO_F_RING_EVENT_IDX negotiated. event_idx_enabled: bool, /// The number of descriptor chains placed in the used ring via `add_used` /// since the last time `needs_notification` was called on the associated queue. num_added: Wrapping<u16>, /// The queue size in elements the driver selected. size: u16, /// Indicates if the queue is finished with configuration. ready: bool, /// Guest physical address of the descriptor table.
desc_table: GuestAddress, /// Guest physical address of the available ring. avail_ring: GuestAddress, /// Guest physical address of the used ring. used_ring: GuestAddress, } impl Queue { /// Equivalent of [`QueueT::set_size`] returning an error in case of invalid size. /// /// This should not be directly used, as the preferred method is part of the [`QueueT`] /// interface. This is a convenience function for implementing save/restore capabilities. pub fn try_set_size(&mut self, size: u16) -> Result<(), Error> { if size > self.max_size() || size == 0 || (size & (size - 1)) != 0 { return Err(Error::InvalidSize); } self.size = size; Ok(()) } /// Tries to set the descriptor table address. In case of an invalid value, the address is /// not updated. /// /// This should not be directly used, as the preferred method is /// [`QueueT::set_desc_table_address`]. This is a convenience function for implementing /// save/restore capabilities. pub fn try_set_desc_table_address(&mut self, desc_table: GuestAddress) -> Result<(), Error> { if desc_table.mask(0xf) != 0 { return Err(Error::InvalidDescTableAlign); } self.desc_table = desc_table; Ok(()) } /// Tries to update the available ring address. In case of an invalid value, the address is /// not updated. /// /// This should not be directly used, as the preferred method is /// [`QueueT::set_avail_ring_address`]. This is a convenience function for implementing /// save/restore capabilities. pub fn try_set_avail_ring_address(&mut self, avail_ring: GuestAddress) -> Result<(), Error> { if avail_ring.mask(0x1) != 0 { return Err(Error::InvalidAvailRingAlign); } self.avail_ring = avail_ring; Ok(()) } /// Tries to update the used ring address. In case of an invalid value, the address is not /// updated. /// /// This should not be directly used, as the preferred method is /// [`QueueT::set_used_ring_address`]. This is a convenience function for implementing /// save/restore capabilities.
pub fn try_set_used_ring_address(&mut self, used_ring: GuestAddress) -> Result<(), Error> { if used_ring.mask(0x3) != 0 { return Err(Error::InvalidUsedRingAlign); } self.used_ring = used_ring; Ok(()) } /// Returns the state of the `Queue`. /// /// This is useful for implementing save/restore capabilities. /// The state does not have support for serialization, but this can be /// added by VMMs locally through the use of a /// [remote type](https://serde.rs/remote-derive.html). /// /// Alternatively, a version aware and serializable/deserializable QueueState /// is available in the `virtio-queue-ser` crate. pub fn state(&self) -> QueueState { QueueState { max_size: self.max_size, next_avail: self.next_avail(), next_used: self.next_used(), event_idx_enabled: self.event_idx_enabled, size: self.size, ready: self.ready, desc_table: self.desc_table(), avail_ring: self.avail_ring(), used_ring: self.used_ring(), } } // Helper method that writes `val` to the `avail_event` field of the used ring, using // the provided ordering. fn set_avail_event<M: GuestMemory>( &self, mem: &M, val: u16, order: Ordering, ) -> Result<(), Error> { // This can not overflow an u64 since it is working with relatively small numbers compared // to u64::MAX. let avail_event_offset = VIRTQ_USED_RING_HEADER_SIZE + VIRTQ_USED_ELEMENT_SIZE * u64::from(self.size); let addr = self .used_ring .checked_add(avail_event_offset) .ok_or(Error::AddressOverflow)?; mem.store(u16::to_le(val), addr, order) .map_err(Error::GuestMemory) } // Set the value of the `flags` field of the used ring, applying the specified ordering. fn set_used_flags<M: GuestMemory>( &mut self, mem: &M, val: u16, order: Ordering, ) -> Result<(), Error> { mem.store(u16::to_le(val), self.used_ring, order) .map_err(Error::GuestMemory) } // Write the appropriate values to enable or disable notifications from the driver. // // Every access in this method uses `Relaxed` ordering because a fence is added by the caller // when appropriate.
fn set_notification<M: GuestMemory>(&mut self, mem: &M, enable: bool) -> Result<(), Error> { if enable { if self.event_idx_enabled { // We call `set_avail_event` using the `next_avail` value, instead of reading // and using the current `avail_idx` to avoid missing notifications. More // details in `enable_notification`. self.set_avail_event(mem, self.next_avail.0, Ordering::Relaxed) } else { self.set_used_flags(mem, 0, Ordering::Relaxed) } } else if !self.event_idx_enabled { self.set_used_flags(mem, VRING_USED_F_NO_NOTIFY as u16, Ordering::Relaxed) } else { // Notifications are effectively disabled by default after triggering once when // `VIRTIO_F_EVENT_IDX` is negotiated, so we don't do anything in that case. Ok(()) } } // Return the value present in the used_event field of the avail ring. // // If the VIRTIO_F_EVENT_IDX feature bit is not negotiated, the flags field in the available // ring offers a crude mechanism for the driver to inform the device that it doesn’t want // interrupts when buffers are used. Otherwise virtq_avail.used_event is a more performant // alternative where the driver specifies how far the device can progress before interrupting. // // Neither of these interrupt suppression methods is reliable, as they are not synchronized // with the device, but they serve as useful optimizations. So we only ensure access to the // virtq_avail.used_event is atomic, but do not need to synchronize with other memory accesses. fn used_event<M: GuestMemory>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error> { // This can not overflow an u64 since it is working with relatively small numbers compared // to u64::MAX.
let used_event_offset = VIRTQ_AVAIL_RING_HEADER_SIZE + u64::from(self.size) * VIRTQ_AVAIL_ELEMENT_SIZE; let used_event_addr = self .avail_ring .checked_add(used_event_offset) .ok_or(Error::AddressOverflow)?; mem.load(used_event_addr, order) .map(u16::from_le) .map(Wrapping) .map_err(Error::GuestMemory) } } impl<'a> QueueGuard<'a> for Queue { type G = &'a mut Self; } impl QueueT for Queue { fn new(max_size: u16) -> Result<Self, Error> { // We need to check that the max size is a power of 2 because we're setting this as the // queue size, and the valid queue sizes are a power of 2 as per the specification. if max_size == 0 || max_size > MAX_QUEUE_SIZE || (max_size & (max_size - 1)) != 0 { return Err(Error::InvalidMaxSize); } Ok(Queue { max_size, size: max_size, ready: false, desc_table: GuestAddress(DEFAULT_DESC_TABLE_ADDR), avail_ring: GuestAddress(DEFAULT_AVAIL_RING_ADDR), used_ring: GuestAddress(DEFAULT_USED_RING_ADDR), next_avail: Wrapping(0), next_used: Wrapping(0), event_idx_enabled: false, num_added: Wrapping(0), }) } fn is_valid<M: GuestMemory>(&self, mem: &M) -> bool { let queue_size = self.size as u64; let desc_table = self.desc_table; // The multiplication can not overflow an u64 since we are multiplying an u16 with a // small number. let desc_table_size = size_of::<Descriptor>() as u64 * queue_size; let avail_ring = self.avail_ring; // The operations below can not overflow an u64 since they're working with relatively small // numbers compared to u64::MAX.
let avail_ring_size = VIRTQ_AVAIL_RING_META_SIZE + VIRTQ_AVAIL_ELEMENT_SIZE * queue_size; let used_ring = self.used_ring; let used_ring_size = VIRTQ_USED_RING_META_SIZE + VIRTQ_USED_ELEMENT_SIZE * queue_size; if !self.ready { error!("attempt to use virtio queue that is not marked ready"); false } else if desc_table .checked_add(desc_table_size) .map_or(true, |v| !mem.address_in_range(v)) { error!( "virtio queue descriptor table goes out of bounds: start:0x{:08x} size:0x{:08x}", desc_table.raw_value(), desc_table_size ); false } else if avail_ring .checked_add(avail_ring_size) .map_or(true, |v| !mem.address_in_range(v)) { error!( "virtio queue available ring goes out of bounds: start:0x{:08x} size:0x{:08x}", avail_ring.raw_value(), avail_ring_size ); false } else if used_ring .checked_add(used_ring_size) .map_or(true, |v| !mem.address_in_range(v)) { error!( "virtio queue used ring goes out of bounds: start:0x{:08x} size:0x{:08x}", used_ring.raw_value(), used_ring_size ); false } else { true } } fn reset(&mut self) { self.ready = false; self.size = self.max_size; self.desc_table = GuestAddress(DEFAULT_DESC_TABLE_ADDR); self.avail_ring = GuestAddress(DEFAULT_AVAIL_RING_ADDR); self.used_ring = GuestAddress(DEFAULT_USED_RING_ADDR); self.next_avail = Wrapping(0); self.next_used = Wrapping(0); self.num_added = Wrapping(0); self.event_idx_enabled = false; } fn lock(&mut self) -> <Self as QueueGuard<'_>>::G { self } fn max_size(&self) -> u16 { self.max_size } fn size(&self) -> u16 { self.size } fn set_size(&mut self, size: u16) { if self.try_set_size(size).is_err() { error!("virtio queue with invalid size: {}", size); } } fn ready(&self) -> bool { self.ready } fn set_ready(&mut self, ready: bool) { self.ready = ready; } fn set_desc_table_address(&mut self, low: Option<u32>, high: Option<u32>) { let low = low.unwrap_or(self.desc_table.0 as u32) as u64; let high = high.unwrap_or((self.desc_table.0 >> 32) as u32) as u64; let desc_table = GuestAddress((high << 32) | low); if 
self.try_set_desc_table_address(desc_table).is_err() { error!("virtio queue descriptor table breaks alignment constraints"); } } fn set_avail_ring_address(&mut self, low: Option<u32>, high: Option<u32>) { let low = low.unwrap_or(self.avail_ring.0 as u32) as u64; let high = high.unwrap_or((self.avail_ring.0 >> 32) as u32) as u64; let avail_ring = GuestAddress((high << 32) | low); if self.try_set_avail_ring_address(avail_ring).is_err() { error!("virtio queue available ring breaks alignment constraints"); } } fn set_used_ring_address(&mut self, low: Option<u32>, high: Option<u32>) { let low = low.unwrap_or(self.used_ring.0 as u32) as u64; let high = high.unwrap_or((self.used_ring.0 >> 32) as u32) as u64; let used_ring = GuestAddress((high << 32) | low); if self.try_set_used_ring_address(used_ring).is_err() { error!("virtio queue used ring breaks alignment constraints"); } } fn set_event_idx(&mut self, enabled: bool) { self.event_idx_enabled = enabled; } fn avail_idx<M>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error> where M: GuestMemory + ?Sized, { let addr = self .avail_ring .checked_add(2) .ok_or(Error::AddressOverflow)?; mem.load(addr, order) .map(u16::from_le) .map(Wrapping) .map_err(Error::GuestMemory) } fn used_idx<M: GuestMemory>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error> { let addr = self .used_ring .checked_add(2) .ok_or(Error::AddressOverflow)?; mem.load(addr, order) .map(u16::from_le) .map(Wrapping) .map_err(Error::GuestMemory) } fn add_used<M: GuestMemory>( &mut self, mem: &M, head_index: u16, len: u32, ) -> Result<(), Error> { if head_index >= self.size { error!( "attempted to add out of bounds descriptor to used ring: {}", head_index ); return Err(Error::InvalidDescriptorIndex); } let next_used_index = u64::from(self.next_used.0 % self.size); // This can not overflow an u64 since it is working with relatively small numbers compared // to u64::MAX.
let offset = VIRTQ_USED_RING_HEADER_SIZE + next_used_index * VIRTQ_USED_ELEMENT_SIZE; let addr = self .used_ring .checked_add(offset) .ok_or(Error::AddressOverflow)?; mem.write_obj(VirtqUsedElem::new(head_index.into(), len), addr) .map_err(Error::GuestMemory)?; self.next_used += Wrapping(1); self.num_added += Wrapping(1); mem.store( u16::to_le(self.next_used.0), self.used_ring .checked_add(2) .ok_or(Error::AddressOverflow)?, Ordering::Release, ) .map_err(Error::GuestMemory) } // TODO: Turn this into a doc comment/example. // With the current implementation, a common way of consuming entries from the available ring // while also leveraging notification suppression is to use a loop, for example: // // loop { // // We have to explicitly disable notifications if `VIRTIO_F_EVENT_IDX` has not been // // negotiated. // self.disable_notification()?; // // for chain in self.iter()? { // // Do something with each chain ... // // Let's assume we process all available chains here. // } // // // If `enable_notification` returns `true`, the driver has added more entries to the // // available ring. // if !self.enable_notification()? { // break; // } // } fn enable_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<bool, Error> { self.set_notification(mem, true)?; // Ensures the following read is not reordered before any previous write operation. fence(Ordering::SeqCst); // We double check here to avoid the situation where the available ring has been updated // just before we re-enabled notifications, and it's possible to miss one. We compare the // current `avail_idx` value to `self.next_avail` because it's where we stopped processing // entries. There are situations where we intentionally avoid processing everything in the // available ring (which will cause this method to return `true`), but in that case we'll // probably not re-enable notifications as we already know there are pending entries.
self.avail_idx(mem, Ordering::Relaxed) .map(|idx| idx != self.next_avail) } fn disable_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<(), Error> { self.set_notification(mem, false) } fn needs_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<bool, Error> { let used_idx = self.next_used; // Complete all the writes in add_used() before reading the event. fence(Ordering::SeqCst); // The VRING_AVAIL_F_NO_INTERRUPT flag isn't supported yet. // When the `EVENT_IDX` feature is negotiated, the driver writes into `used_event` // a value that's used by the device to determine whether a notification must // be submitted after adding a descriptor chain to the used ring. According to the // standard, the notification must be sent when `next_used == used_event + 1`, but // various device model implementations rely on an inequality instead, most likely // to also support use cases where a bunch of descriptor chains are added to the used // ring first, and only afterwards the `needs_notification` logic is called. For example, // the approach based on `num_added` below is taken from the Linux Kernel implementation // (i.e. https://elixir.bootlin.com/linux/v5.15.35/source/drivers/virtio/virtio_ring.c#L661) // The `old` variable below is used to determine the value of `next_used` from when // `needs_notification` was called last (each `needs_notification` call resets `num_added` // to zero, while each `add_used` call increments it by one). Then, the logic below // uses wrapped arithmetic to see whether `used_event` can be found between `old` and // `next_used` in the circular sequence space of the used ring.
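The circular-window test described in the comments above can be exercised in isolation. A minimal sketch follows; `window_contains` is an illustrative helper mirroring the comparison used below, not part of this crate's API:

```rust
use std::num::Wrapping;

// Notify only if `used_event` lies in the half-open window (old, next_used],
// i.e. the device crossed the driver-requested index since the last check.
fn window_contains(
    next_used: Wrapping<u16>,
    old: Wrapping<u16>,
    used_event: Wrapping<u16>,
) -> bool {
    next_used - used_event - Wrapping(1) < next_used - old
}

fn main() {
    // Device moved from old = 2 to next_used = 5; the driver asked for used_event = 3.
    assert!(window_contains(Wrapping(5), Wrapping(2), Wrapping(3)));
    // used_event = 7 is outside (2, 5], so no notification is needed yet.
    assert!(!window_contains(Wrapping(5), Wrapping(2), Wrapping(7)));
    // The comparison survives wrap-around: old = 65534, next_used = 1, used_event = 65535.
    assert!(window_contains(Wrapping(1), Wrapping(65534), Wrapping(65535)));
}
```

Because all three operands use `Wrapping<u16>` arithmetic, the same inequality works across the 16-bit index wrap, which is why the implementation can avoid any explicit modulo handling.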
if self.event_idx_enabled { let used_event = self.used_event(mem, Ordering::Relaxed)?; let old = used_idx - self.num_added; self.num_added = Wrapping(0); return Ok(used_idx - used_event - Wrapping(1) < used_idx - old); } Ok(true) } fn next_avail(&self) -> u16 { self.next_avail.0 } fn set_next_avail(&mut self, next_avail: u16) { self.next_avail = Wrapping(next_avail); } fn next_used(&self) -> u16 { self.next_used.0 } fn set_next_used(&mut self, next_used: u16) { self.next_used = Wrapping(next_used); } fn desc_table(&self) -> u64 { self.desc_table.0 } fn avail_ring(&self) -> u64 { self.avail_ring.0 } fn used_ring(&self) -> u64 { self.used_ring.0 } fn event_idx_enabled(&self) -> bool { self.event_idx_enabled } fn pop_descriptor_chain<M>(&mut self, mem: M) -> Option<DescriptorChain<M>> where M: Clone + Deref, M::Target: GuestMemory, { // Default, iter-based impl. Will be subsequently improved. match self.iter(mem) { Ok(mut iter) => iter.next(), Err(e) => { error!("Iterator error {}", e); None } } } } impl QueueOwnedT for Queue { fn iter<M>(&mut self, mem: M) -> Result<AvailIter<'_, M>, Error> where M: Deref, M::Target: GuestMemory, { // We're checking here that a reset did not happen without re-initializing the queue. // TODO: In the future we might want to also check that the other parameters in the // queue are valid. if !self.ready || self.avail_ring == GuestAddress(0) { return Err(Error::QueueNotReady); } self.avail_idx(mem.deref(), Ordering::Acquire) .map(move |idx| AvailIter::new(mem, idx, self))? } fn go_to_previous_position(&mut self) { self.next_avail -= Wrapping(1); } } /// Consuming iterator over all available descriptor chain heads in the queue.
/// /// # Example /// /// ```rust /// # use virtio_bindings::bindings::virtio_ring::{VRING_DESC_F_NEXT, VRING_DESC_F_WRITE}; /// # use virtio_queue::mock::MockSplitQueue; /// use virtio_queue::{Descriptor, Queue, QueueOwnedT}; /// use vm_memory::{GuestAddress, GuestMemoryMmap}; /// /// # fn populate_queue(m: &GuestMemoryMmap) -> Queue { /// # let vq = MockSplitQueue::new(m, 16); /// # let mut q: Queue = vq.create_queue().unwrap(); /// # /// # // The chains are (0, 1), (2, 3, 4) and (5, 6). /// # let mut descs = Vec::new(); /// # for i in 0..7 { /// # let flags = match i { /// # 1 | 6 => 0, /// # 2 | 5 => VRING_DESC_F_NEXT | VRING_DESC_F_WRITE, /// # 4 => VRING_DESC_F_WRITE, /// # _ => VRING_DESC_F_NEXT, /// # }; /// # /// # descs.push(Descriptor::new((0x1000 * (i + 1)) as u64, 0x1000, flags as u16, i + 1)); /// # } /// # /// # vq.add_desc_chains(&descs, 0).unwrap(); /// # q /// # } /// let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); /// // Populate the queue with descriptor chains and update the available ring accordingly. /// let mut queue = populate_queue(m); /// let mut i = queue.iter(m).unwrap(); /// /// { /// let mut c = i.next().unwrap(); /// let _first_head_index = c.head_index(); /// // We should have two descriptors in the first chain. /// let _desc1 = c.next().unwrap(); /// let _desc2 = c.next().unwrap(); /// } /// /// { /// let c = i.next().unwrap(); /// let _second_head_index = c.head_index(); /// /// let mut iter = c.writable(); /// // We should have two writable descriptors in the second chain. /// let _desc1 = iter.next().unwrap(); /// let _desc2 = iter.next().unwrap(); /// } /// /// { /// let c = i.next().unwrap(); /// let _third_head_index = c.head_index(); /// /// let mut iter = c.readable(); /// // We should have one readable descriptor in the third chain. /// let _desc1 = iter.next().unwrap(); /// } /// // Let's go back one position in the available ring. 
/// i.go_to_previous_position(); /// // We should be able to access again the third descriptor chain. /// let c = i.next().unwrap(); /// let _third_head_index = c.head_index(); /// ``` #[derive(Debug)] pub struct AvailIter<'b, M> { mem: M, desc_table: GuestAddress, avail_ring: GuestAddress, queue_size: u16, last_index: Wrapping<u16>, next_avail: &'b mut Wrapping<u16>, } impl<'b, M> AvailIter<'b, M> where M: Deref, M::Target: GuestMemory, { /// Create a new instance of `AvailIter`. /// /// # Arguments /// * `mem` - the `GuestMemory` object that can be used to access the queue buffers. /// * `idx` - the index of the available ring entry where the driver would put the next /// available descriptor chain. /// * `queue` - the `Queue` object from which the needed data to create the `AvailIter` can /// be retrieved. pub(crate) fn new(mem: M, idx: Wrapping<u16>, queue: &'b mut Queue) -> Result<Self, Error> { // The number of descriptor chain heads to process should always // be smaller than or equal to the queue size, as the driver should // never ask the VMM to process an available ring entry more than // once. Checking and reporting such incorrect driver behavior // can prevent potential hanging and Denial-of-Service from // happening on the VMM side. if (idx - queue.next_avail).0 > queue.size { return Err(Error::InvalidAvailRingIndex); } Ok(AvailIter { mem, desc_table: queue.desc_table, avail_ring: queue.avail_ring, queue_size: queue.size, last_index: idx, next_avail: &mut queue.next_avail, }) } /// Goes back one position in the available descriptor chain offered by the driver. /// /// Rust does not support bidirectional iterators. This is the only way to revert the effect /// of an iterator increment on the queue. /// /// Note: this method assumes there's only one thread manipulating the queue, so it should only /// be invoked in single-threaded context.
pub fn go_to_previous_position(&mut self) { *self.next_avail -= Wrapping(1); } } impl<'b, M> Iterator for AvailIter<'b, M> where M: Clone + Deref, M::Target: GuestMemory, { type Item = DescriptorChain<M>; fn next(&mut self) -> Option<Self::Item> { if *self.next_avail == self.last_index { return None; } // These two operations can not overflow an u64 since they're working with relatively small // numbers compared to u64::MAX. let elem_off = u64::from(self.next_avail.0.checked_rem(self.queue_size)?) * VIRTQ_AVAIL_ELEMENT_SIZE; let offset = VIRTQ_AVAIL_RING_HEADER_SIZE + elem_off; let addr = self.avail_ring.checked_add(offset)?; let head_index: u16 = self .mem .load(addr, Ordering::Acquire) .map(u16::from_le) .map_err(|_| error!("Failed to read from memory {:x}", addr.raw_value())) .ok()?; *self.next_avail += Wrapping(1); Some(DescriptorChain::new( self.mem.clone(), self.desc_table, self.queue_size, head_index, )) } } #[cfg(any(test, feature = "test-utils"))] // It is convenient for tests to implement `PartialEq`, but it is not a // proper implementation as `GuestMemory` errors cannot implement `PartialEq`.
impl PartialEq for Error { fn eq(&self, other: &Self) -> bool { format!("{}", &self) == format!("{}", other) } } #[cfg(test)] mod tests { use super::*; use crate::defs::{DEFAULT_AVAIL_RING_ADDR, DEFAULT_DESC_TABLE_ADDR, DEFAULT_USED_RING_ADDR}; use crate::mock::MockSplitQueue; use crate::Descriptor; use virtio_bindings::bindings::virtio_ring::{ VRING_DESC_F_NEXT, VRING_DESC_F_WRITE, VRING_USED_F_NO_NOTIFY, }; use vm_memory::{Address, Bytes, GuestAddress, GuestMemoryMmap}; #[test] fn test_queue_is_valid() { let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(m, 16); let mut q: Queue = vq.create_queue().unwrap(); // q is currently valid assert!(q.is_valid(m)); // shouldn't be valid when not marked as ready q.set_ready(false); assert!(!q.ready()); assert!(!q.is_valid(m)); q.set_ready(true); // shouldn't be allowed to set a size > max_size q.set_size(q.max_size() << 1); assert_eq!(q.size, q.max_size()); // or set the size to 0 q.set_size(0); assert_eq!(q.size, q.max_size()); // or set a size which is not a power of 2 q.set_size(11); assert_eq!(q.size, q.max_size()); // but should be allowed to set a size if 0 < size <= max_size and size is a power of two q.set_size(4); assert_eq!(q.size, 4); q.size = q.max_size(); // shouldn't be allowed to set an address that breaks the alignment constraint q.set_desc_table_address(Some(0xf), None); assert_eq!(q.desc_table.0, vq.desc_table_addr().0); // should be allowed to set an aligned out of bounds address q.set_desc_table_address(Some(0xffff_fff0), None); assert_eq!(q.desc_table.0, 0xffff_fff0); // but shouldn't be valid assert!(!q.is_valid(m)); // but should be allowed to set a valid descriptor table address q.set_desc_table_address(Some(0x10), None); assert_eq!(q.desc_table.0, 0x10); assert!(q.is_valid(m)); let addr = vq.desc_table_addr().0; q.set_desc_table_address(Some(addr as u32), Some((addr >> 32) as u32)); // shouldn't be allowed to set an address that breaks
the alignment constraint q.set_avail_ring_address(Some(0x1), None); assert_eq!(q.avail_ring.0, vq.avail_addr().0); // should be allowed to set an aligned out of bounds address q.set_avail_ring_address(Some(0xffff_fffe), None); assert_eq!(q.avail_ring.0, 0xffff_fffe); // but shouldn't be valid assert!(!q.is_valid(m)); // but should be allowed to set a valid available ring address q.set_avail_ring_address(Some(0x2), None); assert_eq!(q.avail_ring.0, 0x2); assert!(q.is_valid(m)); let addr = vq.avail_addr().0; q.set_avail_ring_address(Some(addr as u32), Some((addr >> 32) as u32)); // shouldn't be allowed to set an address that breaks the alignment constraint q.set_used_ring_address(Some(0x3), None); assert_eq!(q.used_ring.0, vq.used_addr().0); // should be allowed to set an aligned out of bounds address q.set_used_ring_address(Some(0xffff_fffc), None); assert_eq!(q.used_ring.0, 0xffff_fffc); // but shouldn't be valid assert!(!q.is_valid(m)); // but should be allowed to set a valid used ring address q.set_used_ring_address(Some(0x4), None); assert_eq!(q.used_ring.0, 0x4); let addr = vq.used_addr().0; q.set_used_ring_address(Some(addr as u32), Some((addr >> 32) as u32)); assert!(q.is_valid(m)); } #[test] fn test_add_used() { let mem = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(mem, 16); let mut q: Queue = vq.create_queue().unwrap(); assert_eq!(q.used_idx(mem, Ordering::Acquire).unwrap(), Wrapping(0)); assert_eq!(u16::from_le(vq.used().idx().load()), 0); // index too large assert!(q.add_used(mem, 16, 0x1000).is_err()); assert_eq!(u16::from_le(vq.used().idx().load()), 0); // should be ok q.add_used(mem, 1, 0x1000).unwrap(); assert_eq!(q.next_used, Wrapping(1)); assert_eq!(q.used_idx(mem, Ordering::Acquire).unwrap(), Wrapping(1)); assert_eq!(u16::from_le(vq.used().idx().load()), 1); let x = vq.used().ring().ref_at(0).unwrap().load(); assert_eq!(x.id(), 1); assert_eq!(x.len(), 0x1000); } #[test] fn 
test_reset_queue() { let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(m, 16); let mut q: Queue = vq.create_queue().unwrap(); q.set_size(8); // The address set by `MockSplitQueue` for the descriptor table is DEFAULT_DESC_TABLE_ADDR, // so let's change it for testing the reset. q.set_desc_table_address(Some(0x5000), None); // Same for `event_idx_enabled`, `next_avail`, `next_used` and `signalled_used`. q.set_event_idx(true); q.set_next_avail(2); q.set_next_used(4); q.num_added = Wrapping(15); assert_eq!(q.size, 8); // `create_queue` also marks the queue as ready. assert!(q.ready); assert_ne!(q.desc_table, GuestAddress(DEFAULT_DESC_TABLE_ADDR)); assert_ne!(q.avail_ring, GuestAddress(DEFAULT_AVAIL_RING_ADDR)); assert_ne!(q.used_ring, GuestAddress(DEFAULT_USED_RING_ADDR)); assert_ne!(q.next_avail, Wrapping(0)); assert_ne!(q.next_used, Wrapping(0)); assert_ne!(q.num_added, Wrapping(0)); assert!(q.event_idx_enabled); q.reset(); assert_eq!(q.size, 16); assert!(!q.ready); assert_eq!(q.desc_table, GuestAddress(DEFAULT_DESC_TABLE_ADDR)); assert_eq!(q.avail_ring, GuestAddress(DEFAULT_AVAIL_RING_ADDR)); assert_eq!(q.used_ring, GuestAddress(DEFAULT_USED_RING_ADDR)); assert_eq!(q.next_avail, Wrapping(0)); assert_eq!(q.next_used, Wrapping(0)); assert_eq!(q.num_added, Wrapping(0)); assert!(!q.event_idx_enabled); } #[test] fn test_needs_notification() { let mem = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let qsize = 16; let vq = MockSplitQueue::new(mem, qsize); let mut q: Queue = vq.create_queue().unwrap(); let avail_addr = vq.avail_addr(); // It should always return true when EVENT_IDX isn't enabled. for i in 0..qsize { q.next_used = Wrapping(i); assert!(q.needs_notification(mem).unwrap()); } mem.write_obj::<u16>( u16::to_le(4), avail_addr.unchecked_add(4 + qsize as u64 * 2), ) .unwrap(); q.set_event_idx(true); // Incrementing up to this value causes a `u16` to wrap back to 0.
let wrap = u32::from(u16::MAX) + 1; for i in 0..wrap + 12 { q.next_used = Wrapping(i as u16); // Let's test wrapping around the maximum index value as well. // `num_added` needs to be at least `1` to represent the fact that new descriptor // chains have been added to the used ring since the last time `needs_notification` // returned. q.num_added = Wrapping(1); let expected = i == 5 || i == (5 + wrap); assert_eq!((q.needs_notification(mem).unwrap(), i), (expected, i)); } mem.write_obj::<u16>( u16::to_le(8), avail_addr.unchecked_add(4 + qsize as u64 * 2), ) .unwrap(); // Returns `false` because the current `used_event` value is behind both `next_used` and // the value of `next_used` at the time when `needs_notification` last returned (which is // computed based on `num_added` as described in the comments for `needs_notification`). assert!(!q.needs_notification(mem).unwrap()); mem.write_obj::<u16>( u16::to_le(15), avail_addr.unchecked_add(4 + qsize as u64 * 2), ) .unwrap(); q.num_added = Wrapping(1); assert!(!q.needs_notification(mem).unwrap()); q.next_used = Wrapping(15); q.num_added = Wrapping(1); assert!(!q.needs_notification(mem).unwrap()); q.next_used = Wrapping(16); q.num_added = Wrapping(1); assert!(q.needs_notification(mem).unwrap()); // Calling `needs_notification` again immediately returns `false`. assert!(!q.needs_notification(mem).unwrap()); mem.write_obj::<u16>( u16::to_le(u16::MAX - 3), avail_addr.unchecked_add(4 + qsize as u64 * 2), ) .unwrap(); q.next_used = Wrapping(u16::MAX - 2); q.num_added = Wrapping(1); // Returns `true` because, when looking at the circular sequence of indices of the used ring, // the value we wrote in the `used_event` appears between the "old" value of `next_used` // (i.e. `next_used` - `num_added`) and the current `next_used`, thus suggesting that we // need to notify the driver.
assert!(q.needs_notification(mem).unwrap()); } #[test] fn test_enable_disable_notification() { let mem = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(mem, 16); let mut q: Queue = vq.create_queue().unwrap(); let used_addr = vq.used_addr(); assert!(!q.event_idx_enabled); q.enable_notification(mem).unwrap(); let v = mem.read_obj::<u16>(used_addr).map(u16::from_le).unwrap(); assert_eq!(v, 0); q.disable_notification(mem).unwrap(); let v = mem.read_obj::<u16>(used_addr).map(u16::from_le).unwrap(); assert_eq!(v, VRING_USED_F_NO_NOTIFY as u16); q.enable_notification(mem).unwrap(); let v = mem.read_obj::<u16>(used_addr).map(u16::from_le).unwrap(); assert_eq!(v, 0); q.set_event_idx(true); let avail_addr = vq.avail_addr(); mem.write_obj::<u16>(u16::to_le(2), avail_addr.unchecked_add(2)) .unwrap(); assert!(q.enable_notification(mem).unwrap()); q.next_avail = Wrapping(2); assert!(!q.enable_notification(mem).unwrap()); mem.write_obj::<u16>(u16::to_le(8), avail_addr.unchecked_add(2)) .unwrap(); assert!(q.enable_notification(mem).unwrap()); q.next_avail = Wrapping(8); assert!(!q.enable_notification(mem).unwrap()); } #[test] fn test_consume_chains_with_notif() { let mem = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(mem, 16); let mut q: Queue = vq.create_queue().unwrap(); // q is currently valid. assert!(q.is_valid(mem)); // The chains are (0, 1), (2, 3, 4), (5, 6), (7, 8), (9, 10, 11, 12). let mut descs = Vec::new(); for i in 0..13 { let flags = match i { 1 | 4 | 6 | 8 | 12 => 0, _ => VRING_DESC_F_NEXT, }; descs.push(Descriptor::new( (0x1000 * (i + 1)) as u64, 0x1000, flags as u16, i + 1, )); } vq.add_desc_chains(&descs, 0).unwrap(); // Update the index of the chain that can be consumed to not be the last one. // This enables us to consume chains in multiple iterations as opposed to consuming // all the driver-written chains at once.
vq.avail().idx().store(u16::to_le(2)); // No descriptor chains are consumed at this point. assert_eq!(q.next_avail(), 0); let mut i = 0; loop { i += 1; q.disable_notification(mem).unwrap(); while let Some(chain) = q.iter(mem).unwrap().next() { // Process the descriptor chain, and then add entries to the // used ring. let head_index = chain.head_index(); let mut desc_len = 0; chain.for_each(|d| { if d.flags() as u32 & VRING_DESC_F_WRITE == VRING_DESC_F_WRITE { desc_len += d.len(); } }); q.add_used(mem, head_index, desc_len).unwrap(); } if !q.enable_notification(mem).unwrap() { break; } } // The chains should be consumed in a single loop iteration because there's nothing updating // the `idx` field of the available ring in the meantime. assert_eq!(i, 1); // The next chain that can be consumed should have index 2. assert_eq!(q.next_avail(), 2); assert_eq!(q.next_used(), 2); // Let the device know it can consume one more chain. vq.avail().idx().store(u16::to_le(3)); i = 0; loop { i += 1; q.disable_notification(mem).unwrap(); while let Some(chain) = q.iter(mem).unwrap().next() { // Process the descriptor chain, and then add entries to the // used ring. let head_index = chain.head_index(); let mut desc_len = 0; chain.for_each(|d| { if d.flags() as u32 & VRING_DESC_F_WRITE == VRING_DESC_F_WRITE { desc_len += d.len(); } }); q.add_used(mem, head_index, desc_len).unwrap(); } // For the simplicity of the test we are updating here the `idx` value of the available // ring. Ideally this should be done on a separate thread. // Because of this update, the loop should be iterated again to consume the new // available descriptor chains. vq.avail().idx().store(u16::to_le(4)); if !q.enable_notification(mem).unwrap() { break; } } assert_eq!(i, 2); // The next chain that can be consumed should have index 4. assert_eq!(q.next_avail(), 4); assert_eq!(q.next_used(), 4); // Set an `idx` that is bigger than the number of entries added in the ring. 
// This is an allowed scenario, but the indexes of the chain will have unexpected values. vq.avail().idx().store(u16::to_le(7)); loop { q.disable_notification(mem).unwrap(); while let Some(chain) = q.iter(mem).unwrap().next() { // Process the descriptor chain, and then add entries to the // used ring. let head_index = chain.head_index(); let mut desc_len = 0; chain.for_each(|d| { if d.flags() as u32 & VRING_DESC_F_WRITE == VRING_DESC_F_WRITE { desc_len += d.len(); } }); q.add_used(mem, head_index, desc_len).unwrap(); } if !q.enable_notification(mem).unwrap() { break; } } assert_eq!(q.next_avail(), 7); assert_eq!(q.next_used(), 7); } #[test] fn test_invalid_avail_idx() { // This is a negative test for the following MUST from the spec: `A driver MUST NOT // decrement the available idx on a virtqueue (ie. there is no way to “unexpose” buffers).`. // We validate that for this misconfiguration, the device does not panic. let mem = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(mem, 16); let mut q: Queue = vq.create_queue().unwrap(); // q is currently valid. assert!(q.is_valid(mem)); // The chains are (0, 1), (2, 3, 4), (5, 6). let mut descs = Vec::new(); for i in 0..7 { let flags = match i { 1 | 4 | 6 => 0, _ => VRING_DESC_F_NEXT, }; descs.push(Descriptor::new( (0x1000 * (i + 1)) as u64, 0x1000, flags as u16, i + 1, )); } vq.add_desc_chains(&descs, 0).unwrap(); // Let the device know it can consume chains with the index < 3. vq.avail().idx().store(u16::to_le(3)); // No descriptor chains are consumed at this point. assert_eq!(q.next_avail(), 0); assert_eq!(q.next_used(), 0); loop { q.disable_notification(mem).unwrap(); while let Some(chain) = q.iter(mem).unwrap().next() { // Process the descriptor chain, and then add entries to the // used ring.
let head_index = chain.head_index(); let mut desc_len = 0; chain.for_each(|d| { if d.flags() as u32 & VRING_DESC_F_WRITE == VRING_DESC_F_WRITE { desc_len += d.len(); } }); q.add_used(mem, head_index, desc_len).unwrap(); } if !q.enable_notification(mem).unwrap() { break; } } // The next chain that can be consumed should have index 3. assert_eq!(q.next_avail(), 3); assert_eq!(q.avail_idx(mem, Ordering::Acquire).unwrap(), Wrapping(3)); assert_eq!(q.next_used(), 3); assert_eq!(q.used_idx(mem, Ordering::Acquire).unwrap(), Wrapping(3)); assert!(q.lock().ready()); // Decrement `idx`, which should be forbidden. We don't enforce this here, but we should // test that we don't panic in case the driver decrements it. vq.avail().idx().store(u16::to_le(1)); // Invalid available ring index assert!(q.iter(mem).is_err()); } #[test] fn test_iterator_and_avail_idx() { // This test ensures that constructing a descriptor chain iterator succeeds // with valid available ring indexes, while producing an error with invalid // indexes. let queue_size = 2; let mem = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(mem, queue_size); let mut q: Queue = vq.create_queue().unwrap(); // q is currently valid. assert!(q.is_valid(mem)); // Create descriptors to fill up the queue let mut descs = Vec::new(); for i in 0..queue_size { descs.push(Descriptor::new( (0x1000 * (i + 1)) as u64, 0x1000, 0_u16, i + 1, )); } vq.add_desc_chains(&descs, 0).unwrap(); // Set the 'next_available' index to `u16::MAX` to test the wrapping scenarios q.set_next_avail(u16::MAX); // When the number of chains exposed by the driver is equal to or less than the queue // size, the available ring index is valid and constructs an iterator successfully.
let avail_idx = Wrapping(q.next_avail()) + Wrapping(queue_size); vq.avail().idx().store(u16::to_le(avail_idx.0)); assert!(q.iter(mem).is_ok()); let avail_idx = Wrapping(q.next_avail()) + Wrapping(queue_size - 1); vq.avail().idx().store(u16::to_le(avail_idx.0)); assert!(q.iter(mem).is_ok()); // When the number of chains exposed by the driver is larger than the queue size, the // available ring index is invalid and produces an error from constructing an iterator. let avail_idx = Wrapping(q.next_avail()) + Wrapping(queue_size + 1); vq.avail().idx().store(u16::to_le(avail_idx.0)); assert!(q.iter(mem).is_err()); } #[test] fn test_descriptor_and_iterator() { let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(m, 16); let mut q: Queue = vq.create_queue().unwrap(); // q is currently valid assert!(q.is_valid(m)); // the chains are (0, 1), (2, 3, 4) and (5, 6) let mut descs = Vec::new(); for j in 0..7 { let flags = match j { 1 | 6 => 0, 2 | 5 => VRING_DESC_F_NEXT | VRING_DESC_F_WRITE, 4 => VRING_DESC_F_WRITE, _ => VRING_DESC_F_NEXT, }; descs.push(Descriptor::new( (0x1000 * (j + 1)) as u64, 0x1000, flags as u16, j + 1, )); } vq.add_desc_chains(&descs, 0).unwrap(); let mut i = q.iter(m).unwrap(); { let c = i.next().unwrap(); assert_eq!(c.head_index(), 0); let mut iter = c; assert!(iter.next().is_some()); assert!(iter.next().is_some()); assert!(iter.next().is_none()); assert!(iter.next().is_none()); } { let c = i.next().unwrap(); assert_eq!(c.head_index(), 2); let mut iter = c.writable(); assert!(iter.next().is_some()); assert!(iter.next().is_some()); assert!(iter.next().is_none()); assert!(iter.next().is_none()); } { let c = i.next().unwrap(); assert_eq!(c.head_index(), 5); let mut iter = c.readable(); assert!(iter.next().is_some()); assert!(iter.next().is_none()); assert!(iter.next().is_none()); } } #[test] fn test_iterator() { let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let 
vq = MockSplitQueue::new(m, 16); let mut q: Queue = vq.create_queue().unwrap(); q.size = q.max_size; q.desc_table = vq.desc_table_addr(); q.avail_ring = vq.avail_addr(); q.used_ring = vq.used_addr(); assert!(q.is_valid(m)); { // an invalid queue should return an error when creating an iterator q.ready = false; assert!(q.iter(m).is_err()); } q.ready = true; // now let's create two simple descriptor chains // the chains are (0, 1) and (2, 3, 4) { let mut descs = Vec::new(); for j in 0..5u16 { let flags = match j { 1 | 4 => 0, _ => VRING_DESC_F_NEXT, }; descs.push(Descriptor::new( (0x1000 * (j + 1)) as u64, 0x1000, flags as u16, j + 1, )); } vq.add_desc_chains(&descs, 0).unwrap(); let mut i = q.iter(m).unwrap(); { let mut c = i.next().unwrap(); assert_eq!(c.head_index(), 0); c.next().unwrap(); assert!(c.next().is_some()); assert!(c.next().is_none()); assert_eq!(c.head_index(), 0); } { let mut c = i.next().unwrap(); assert_eq!(c.head_index(), 2); c.next().unwrap(); c.next().unwrap(); c.next().unwrap(); assert!(c.next().is_none()); assert_eq!(c.head_index(), 2); } // also test go_to_previous_position() works as expected { assert!(i.next().is_none()); i.go_to_previous_position(); let mut c = q.iter(m).unwrap().next().unwrap(); c.next().unwrap(); c.next().unwrap(); c.next().unwrap(); assert!(c.next().is_none()); } } // Test that iterating some broken descriptor chain does not exceed // 2^32 bytes in total (VIRTIO spec version 1.2, 2.7.5.2: // Drivers MUST NOT add a descriptor chain longer than 2^32 bytes in // total) { let descs = vec![ Descriptor::new(0x1000, 0xffff_ffff, VRING_DESC_F_NEXT as u16, 1), Descriptor::new(0x1000, 0x1234_5678, 0, 2), ]; vq.add_desc_chains(&descs, 0).unwrap(); let mut yielded_bytes_by_iteration = 0_u32; for d in q.iter(m).unwrap().next().unwrap() { yielded_bytes_by_iteration = yielded_bytes_by_iteration .checked_add(d.len()) .expect("iterator should not yield more than 2^32 bytes"); } } // Same as above, but test with a descriptor which is
self-referential { let descs = vec![Descriptor::new( 0x1000, 0xffff_ffff, VRING_DESC_F_NEXT as u16, 0, )]; vq.add_desc_chains(&descs, 0).unwrap(); let mut yielded_bytes_by_iteration = 0_u32; for d in q.iter(m).unwrap().next().unwrap() { yielded_bytes_by_iteration = yielded_bytes_by_iteration .checked_add(d.len()) .expect("iterator should not yield more than 2^32 bytes"); } } } #[test] fn test_regression_iterator_division() { // This is a regression test that tests that the iterator does not try to divide // by 0 when the queue size is 0 let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(m, 1); // This input was generated by the fuzzer, both for the QueueState and the Descriptor let descriptors: Vec<Descriptor> = vec![Descriptor::new( 14178673876262995140, 3301229764, 50372, 50372, )]; vq.build_desc_chain(&descriptors).unwrap(); let mut q = Queue { max_size: 38, next_avail: Wrapping(0), next_used: Wrapping(0), event_idx_enabled: false, num_added: Wrapping(0), size: 0, ready: false, desc_table: GuestAddress(12837708984796196), avail_ring: GuestAddress(0), used_ring: GuestAddress(9943947977301164032), }; assert!(q.pop_descriptor_chain(m).is_none()); } #[test] fn test_setters_error_cases() { assert_eq!(Queue::new(15).unwrap_err(), Error::InvalidMaxSize); let mut q = Queue::new(16).unwrap(); let expected_val = q.desc_table.0; assert_eq!( q.try_set_desc_table_address(GuestAddress(0xf)).unwrap_err(), Error::InvalidDescTableAlign ); assert_eq!(q.desc_table(), expected_val); let expected_val = q.avail_ring.0; assert_eq!( q.try_set_avail_ring_address(GuestAddress(0x1)).unwrap_err(), Error::InvalidAvailRingAlign ); assert_eq!(q.avail_ring(), expected_val); let expected_val = q.used_ring.0; assert_eq!( q.try_set_used_ring_address(GuestAddress(0x3)).unwrap_err(), Error::InvalidUsedRingAlign ); assert_eq!(q.used_ring(), expected_val); let expected_val = q.size; assert_eq!(q.try_set_size(15).unwrap_err(), Error::InvalidSize);
assert_eq!(q.size(), expected_val) } #[test] // This is a regression test for a fuzzing finding. If the driver requests a reset of the // device, but then does not re-initialize the queue, then a subsequent call to process // a request should yield no descriptors to process. Before this fix we were processing // descriptors that were added to the queue before, and we were ending up processing 255 // descriptors per chain. fn test_regression_timeout_after_reset() { // The input below was generated by libfuzzer and adapted for this test. let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0x0), 0x10000)]).unwrap(); let vq = MockSplitQueue::new(m, 1024); // The input below was generated by the fuzzer. let descriptors: Vec<Descriptor> = vec![ Descriptor::new(21508325467, 0, 1, 4), Descriptor::new(2097152, 4096, 3, 0), Descriptor::new(18374686479672737792, 4294967295, 65535, 29), Descriptor::new(76842670169653248, 1114115, 0, 0), Descriptor::new(16, 983040, 126, 3), Descriptor::new(897648164864, 0, 0, 0), Descriptor::new(111669149722, 0, 0, 0), ]; vq.build_multiple_desc_chains(&descriptors).unwrap(); let mut q: Queue = vq.create_queue().unwrap(); // Setting the queue to ready should not allow consuming descriptors after reset. q.reset(); q.set_ready(true); let mut counter = 0; while let Some(mut desc_chain) = q.pop_descriptor_chain(m) { // this empty loop is here to check that there are no side effects // in terms of memory & execution time. while desc_chain.next().is_some() { counter += 1; } } assert_eq!(counter, 0); // Setting the avail_addr to valid should not allow consuming descriptors after reset. q.reset(); q.set_avail_ring_address(Some(0x1000), None); assert_eq!(q.avail_ring, GuestAddress(0x1000)); counter = 0; while let Some(mut desc_chain) = q.pop_descriptor_chain(m) { // this empty loop is here to check that there are no side effects // in terms of memory & execution time.
while desc_chain.next().is_some() { counter += 1; } } assert_eq!(counter, 0); } }

virtio-queue-0.11.0/src/queue_sync.rs

// Copyright (C) 2021 Alibaba Cloud. All rights reserved. // // SPDX-License-Identifier: Apache-2.0 AND BSD-3-Clause use std::num::Wrapping; use std::ops::Deref; use std::sync::atomic::Ordering; use std::sync::{Arc, Mutex, MutexGuard}; use vm_memory::GuestMemory; use crate::{DescriptorChain, Error, Queue, QueueGuard, QueueT}; /// Struct to maintain information and manipulate state of a virtio queue for multi-threaded /// context. /// /// # Example /// /// ```rust /// use virtio_queue::{Queue, QueueSync, QueueT}; /// use vm_memory::{Bytes, GuestAddress, GuestAddressSpace, GuestMemoryMmap}; /// /// let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); /// let mut queue = QueueSync::new(1024).unwrap(); /// /// // First, the driver sets up the queue; this set up is done via writes on the bus (PCI, MMIO). /// queue.set_size(8); /// queue.set_desc_table_address(Some(0x1000), None); /// queue.set_avail_ring_address(Some(0x2000), None); /// queue.set_used_ring_address(Some(0x3000), None); /// queue.set_ready(true); /// // The user should check if the queue is valid before starting to use it. /// assert!(queue.is_valid(m.memory())); /// /// // The memory object is not embedded in the `QueueSync`, so we have to pass it as a /// // parameter to the methods that access the guest memory. Examples would be: /// queue.add_used(m.memory(), 1, 0x100).unwrap(); /// queue.needs_notification(m.memory()).unwrap(); /// ``` #[derive(Clone, Debug)] pub struct QueueSync { state: Arc<Mutex<Queue>>, } impl QueueSync { fn lock_state(&self) -> MutexGuard<Queue> { // Do not expect poisoned lock.
self.state.lock().unwrap() } } impl<'a> QueueGuard<'a> for QueueSync { type G = MutexGuard<'a, Queue>; } impl QueueT for QueueSync { fn new(max_size: u16) -> Result<Self, Error> { Ok(QueueSync { state: Arc::new(Mutex::new(Queue::new(max_size)?)), }) } fn is_valid<M: GuestMemory>(&self, mem: &M) -> bool { self.lock_state().is_valid(mem) } fn reset(&mut self) { self.lock_state().reset(); } fn lock(&mut self) -> <Self as QueueGuard<'_>>::G { self.lock_state() } fn max_size(&self) -> u16 { self.lock_state().max_size() } fn size(&self) -> u16 { self.lock_state().size() } fn set_size(&mut self, size: u16) { self.lock_state().set_size(size); } fn ready(&self) -> bool { self.lock_state().ready() } fn set_ready(&mut self, ready: bool) { self.lock_state().set_ready(ready) } fn set_desc_table_address(&mut self, low: Option<u32>, high: Option<u32>) { self.lock_state().set_desc_table_address(low, high); } fn set_avail_ring_address(&mut self, low: Option<u32>, high: Option<u32>) { self.lock_state().set_avail_ring_address(low, high); } fn set_used_ring_address(&mut self, low: Option<u32>, high: Option<u32>) { self.lock_state().set_used_ring_address(low, high); } fn set_event_idx(&mut self, enabled: bool) { self.lock_state().set_event_idx(enabled); } fn avail_idx<M>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error> where M: GuestMemory + ?Sized, { self.lock_state().avail_idx(mem, order) } fn used_idx<M: GuestMemory>(&self, mem: &M, order: Ordering) -> Result<Wrapping<u16>, Error> { self.lock_state().used_idx(mem, order) } fn add_used<M: GuestMemory>( &mut self, mem: &M, head_index: u16, len: u32, ) -> Result<(), Error> { self.lock_state().add_used(mem, head_index, len) } fn enable_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<bool, Error> { self.lock_state().enable_notification(mem) } fn disable_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<(), Error> { self.lock_state().disable_notification(mem) } fn needs_notification<M: GuestMemory>(&mut self, mem: &M) -> Result<bool, Error> { self.lock_state().needs_notification(mem) } fn next_avail(&self) -> u16 { self.lock_state().next_avail() } fn set_next_avail(&mut self, next_avail: u16) {
self.lock_state().set_next_avail(next_avail); } fn next_used(&self) -> u16 { self.lock_state().next_used() } fn set_next_used(&mut self, next_used: u16) { self.lock_state().set_next_used(next_used); } fn desc_table(&self) -> u64 { self.lock_state().desc_table() } fn avail_ring(&self) -> u64 { self.lock_state().avail_ring() } fn used_ring(&self) -> u64 { self.lock_state().used_ring() } fn event_idx_enabled(&self) -> bool { self.lock_state().event_idx_enabled() } fn pop_descriptor_chain<M>(&mut self, mem: M) -> Option<DescriptorChain<M>> where M: Clone + Deref, M::Target: GuestMemory, { self.lock_state().pop_descriptor_chain(mem) } } #[cfg(test)] mod tests { use super::*; use crate::defs::{DEFAULT_AVAIL_RING_ADDR, DEFAULT_DESC_TABLE_ADDR, DEFAULT_USED_RING_ADDR}; use std::sync::Barrier; use virtio_bindings::bindings::virtio_ring::VRING_USED_F_NO_NOTIFY; use vm_memory::{Address, Bytes, GuestAddress, GuestAddressSpace, GuestMemoryMmap}; #[test] fn test_queue_state_sync() { let mut q = QueueSync::new(0x1000).unwrap(); let mut q2 = q.clone(); let q3 = q.clone(); let barrier = Arc::new(Barrier::new(3)); let b2 = barrier.clone(); let b3 = barrier.clone(); let t1 = std::thread::spawn(move || { { let guard = q2.lock(); assert!(!guard.ready()); } b2.wait(); b2.wait(); { let guard = q2.lock(); assert!(guard.ready()); } }); let t2 = std::thread::spawn(move || { assert!(!q3.ready()); b3.wait(); b3.wait(); assert!(q3.ready()); }); barrier.wait(); q.set_ready(true); barrier.wait(); t1.join().unwrap(); t2.join().unwrap(); } #[test] fn test_state_sync_add_used() { let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let mut q = QueueSync::new(0x100).unwrap(); q.set_desc_table_address(Some(0x1000), None); q.set_avail_ring_address(Some(0x2000), None); q.set_used_ring_address(Some(0x3000), None); q.set_event_idx(true); q.set_ready(true); assert!(q.is_valid(m.memory())); assert_eq!(q.lock().size(), 0x100); assert_eq!(q.max_size(), 0x100); assert_eq!(q.size(), 0x100);
q.set_size(0x80); assert_eq!(q.size(), 0x80); assert_eq!(q.max_size(), 0x100); q.set_next_avail(5); assert_eq!(q.next_avail(), 5); q.set_next_used(3); assert_eq!(q.next_used(), 3); assert_eq!( q.avail_idx(m.memory(), Ordering::Acquire).unwrap(), Wrapping(0) ); assert_eq!( q.used_idx(m.memory(), Ordering::Acquire).unwrap(), Wrapping(0) ); assert_eq!(q.next_used(), 3); // index too large assert!(q.add_used(m.memory(), 0x200, 0x1000).is_err()); assert_eq!(q.next_used(), 3); // should be ok q.add_used(m.memory(), 1, 0x1000).unwrap(); assert_eq!(q.next_used(), 4); assert_eq!( q.used_idx(m.memory(), Ordering::Acquire).unwrap(), Wrapping(4) ); } #[test] fn test_sync_state_reset_queue() { let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let mut q = QueueSync::new(0x100).unwrap(); q.set_desc_table_address(Some(0x1000), None); q.set_avail_ring_address(Some(0x2000), None); q.set_used_ring_address(Some(0x3000), None); q.set_event_idx(true); q.set_next_avail(2); q.set_next_used(2); q.set_size(0x8); q.set_ready(true); assert!(q.is_valid(m.memory())); q.needs_notification(m.memory()).unwrap(); assert_eq!(q.lock_state().size(), 0x8); assert!(q.lock_state().ready()); assert_ne!(q.lock_state().desc_table(), DEFAULT_DESC_TABLE_ADDR); assert_ne!(q.lock_state().avail_ring(), DEFAULT_AVAIL_RING_ADDR); assert_ne!(q.lock_state().used_ring(), DEFAULT_USED_RING_ADDR); assert_ne!(q.lock_state().next_avail(), 0); assert_ne!(q.lock_state().next_used(), 0); assert!(q.lock_state().event_idx_enabled()); q.reset(); assert_eq!(q.lock_state().size(), 0x100); assert!(!q.lock_state().ready()); assert_eq!(q.lock_state().desc_table(), DEFAULT_DESC_TABLE_ADDR); assert_eq!(q.lock_state().avail_ring(), DEFAULT_AVAIL_RING_ADDR); assert_eq!(q.lock_state().used_ring(), DEFAULT_USED_RING_ADDR); assert_eq!(q.lock_state().next_avail(), 0); assert_eq!(q.lock_state().next_used(), 0); assert!(!q.lock_state().event_idx_enabled()); } #[test] fn test_enable_disable_notification() { 
let m = &GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10000)]).unwrap(); let mem = m.memory(); let mut q = QueueSync::new(0x100).unwrap(); q.set_desc_table_address(Some(0x1000), None); assert_eq!(q.desc_table(), 0x1000); q.set_avail_ring_address(Some(0x2000), None); assert_eq!(q.avail_ring(), 0x2000); q.set_used_ring_address(Some(0x3000), None); assert_eq!(q.used_ring(), 0x3000); q.set_ready(true); assert!(q.is_valid(mem)); let used_addr = GuestAddress(q.lock_state().used_ring()); assert!(!q.event_idx_enabled()); q.enable_notification(mem).unwrap(); let v = m.read_obj::<u16>(used_addr).map(u16::from_le).unwrap(); assert_eq!(v, 0); q.disable_notification(m.memory()).unwrap(); let v = m.read_obj::<u16>(used_addr).map(u16::from_le).unwrap(); assert_eq!(v, VRING_USED_F_NO_NOTIFY as u16); q.enable_notification(mem).unwrap(); let v = m.read_obj::<u16>(used_addr).map(u16::from_le).unwrap(); assert_eq!(v, 0); q.set_event_idx(true); let avail_addr = GuestAddress(q.lock_state().avail_ring()); m.write_obj::<u16>(u16::to_le(2), avail_addr.unchecked_add(2)) .unwrap(); assert!(q.enable_notification(mem).unwrap()); q.lock_state().set_next_avail(2); assert!(!q.enable_notification(mem).unwrap()); m.write_obj::<u16>(u16::to_le(8), avail_addr.unchecked_add(2)) .unwrap(); assert!(q.enable_notification(mem).unwrap()); q.lock_state().set_next_avail(8); assert!(!q.enable_notification(mem).unwrap()); } }

virtio-queue-0.11.0/src/state.rs

use crate::{Error, Queue, QueueT}; use vm_memory::GuestAddress; /// Representation of the `Queue` state. /// /// The `QueueState` represents the pure state of the `queue` without tracking any implementation /// details of the queue. The goal with this design is to minimize the changes required to the /// state, and thus the required transitions between states when upgrading or downgrading. /// /// In practice this means that the `QueueState` consists solely of POD (Plain Old Data).
/// /// Since all of this structure's fields are public, it is considered untrusted. A validated /// queue can be created from the state by calling the associated `try_from` function. #[derive(Clone, Copy, Debug, Default, PartialEq, Eq)] pub struct QueueState { /// The maximum size in elements offered by the device. pub max_size: u16, /// Tail position of the available ring. pub next_avail: u16, /// Head position of the used ring. pub next_used: u16, /// VIRTIO_F_RING_EVENT_IDX negotiated. pub event_idx_enabled: bool, /// The queue size in elements the driver selected. pub size: u16, /// Indicates if the queue is finished with configuration. pub ready: bool, /// Guest physical address of the descriptor table. pub desc_table: u64, /// Guest physical address of the available ring. pub avail_ring: u64, /// Guest physical address of the used ring. pub used_ring: u64, } impl TryFrom<QueueState> for Queue { type Error = Error; fn try_from(q_state: QueueState) -> Result<Self, Self::Error> { let mut q = Queue::new(q_state.max_size)?; q.set_next_avail(q_state.next_avail); q.set_next_used(q_state.next_used); q.set_event_idx(q_state.event_idx_enabled); q.try_set_size(q_state.size)?; q.set_ready(q_state.ready); q.try_set_desc_table_address(GuestAddress(q_state.desc_table))?; q.try_set_avail_ring_address(GuestAddress(q_state.avail_ring))?; q.try_set_used_ring_address(GuestAddress(q_state.used_ring))?; Ok(q) } } #[cfg(test)] mod tests { use super::*; fn create_valid_queue_state() -> QueueState { let queue = Queue::new(16).unwrap(); queue.state() } #[test] fn test_empty_queue_state() { let max_size = 16; let queue = Queue::new(max_size).unwrap(); // Saving the state of a queue on which we didn't do any operation is ok. // Same for restore. let queue_state = queue.state(); let restored_q = Queue::try_from(queue_state).unwrap(); assert_eq!(queue, restored_q); } #[test] fn test_invalid_queue_state() { // Let's generate a state that we know is valid so we can just alter one field at a time.
let mut q_state = create_valid_queue_state(); // Test invalid max_size. // Size too small. q_state.max_size = 0; assert!(Queue::try_from(q_state).is_err()); // Size too big. q_state.max_size = u16::MAX; assert!(Queue::try_from(q_state).is_err()); // Size not a power of 2. q_state.max_size = 15; assert!(Queue::try_from(q_state).is_err()); // Test invalid size. let mut q_state = create_valid_queue_state(); // Size too small. q_state.size = 0; assert!(Queue::try_from(q_state).is_err()); // Size too big. q_state.size = u16::MAX; assert!(Queue::try_from(q_state).is_err()); // Size not a power of 2. q_state.size = 15; assert!(Queue::try_from(q_state).is_err()); // Test invalid desc_table. let mut q_state = create_valid_queue_state(); q_state.desc_table = 0xf; assert!(Queue::try_from(q_state).is_err()); // Test invalid avail_ring. let mut q_state = create_valid_queue_state(); q_state.avail_ring = 0x1; assert!(Queue::try_from(q_state).is_err()); // Test invalid used_ring. let mut q_state = create_valid_queue_state(); q_state.used_ring = 0x3; assert!(Queue::try_from(q_state).is_err()); } }
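The invalid-state cases exercised above all reduce to two checks: a queue size must be a non-zero power of two no larger than 2^15 (the largest size virtio allows), and each ring address must respect the split-queue alignment from the virtio 1.x spec (16 bytes for the descriptor table, 2 bytes for the available ring, 4 bytes for the used ring). A minimal standalone sketch of those predicates follows; the helper names are hypothetical and are not part of the crate's API:

```rust
// Hypothetical helpers mirroring the validation rules that `Queue::try_from`
// enforces on a `QueueState`; this is an illustrative sketch, not the crate's code.

/// A virtio queue size must be non-zero, a power of two, and at most 2^15.
fn is_valid_queue_size(size: u16) -> bool {
    size != 0 && size.is_power_of_two() && size <= (1 << 15)
}

/// An address is aligned when its low bits below `align` (a power of two) are zero.
fn is_aligned(addr: u64, align: u64) -> bool {
    addr & (align - 1) == 0
}

fn main() {
    // Sizes: mirrors the max_size/size cases in `test_invalid_queue_state`.
    assert!(is_valid_queue_size(16));
    assert!(!is_valid_queue_size(0)); // too small
    assert!(!is_valid_queue_size(15)); // not a power of two
    assert!(!is_valid_queue_size(u16::MAX)); // not a power of two

    // Alignments: mirrors the desc_table/avail_ring/used_ring cases.
    assert!(is_aligned(0x1000, 16)); // valid descriptor table address
    assert!(!is_aligned(0xf, 16)); // breaks descriptor table alignment
    assert!(!is_aligned(0x1, 2)); // breaks available ring alignment
    assert!(!is_aligned(0x3, 4)); // breaks used ring alignment
}
```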