concread-0.4.6/.cargo_vcs_info.json0000644000000001360000000000100126210ustar { "git": { "sha1": "7624f5058a8cbc03f331d39cee81b4f6fcadd4ea" }, "path_in_vcs": "" }concread-0.4.6/.github/workflows/rust_test.yml000064400000000000000000000005561046102023000175730ustar 00000000000000name: "Rust Test" # Trigger the workflow on push to master or pull request "on": push: pull_request: jobs: rust_test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Install Rust uses: actions-rs/toolchain@v1.0.6 with: toolchain: stable - name: Cargo test run: cargo test --features tcache concread-0.4.6/.github/workflows/shellcheck.yml000064400000000000000000000004011046102023000176310ustar 00000000000000name: Shellcheck "on": push: pull_request: jobs: shellcheck: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Run ShellCheck uses: ludeeus/action-shellcheck@master env: SHELLCHECK_OPTS: -e SC2148 concread-0.4.6/.gitignore000064400000000000000000000000501046102023000133740ustar 00000000000000/target **/*.rs.bk Cargo.lock .DS_Store concread-0.4.6/CACHE.md000064400000000000000000000552011046102023000125410ustar 00000000000000 Concurrent Adaptive Replacement Cache William Brown, SUSE Labs wbrown@suse.de 2020-04-23 # Concurrent Adaptive Replacement Cache Caching is a very important aspect of computing, helping to improve the performance of applications based on various work factors. Modern systems (especially servers) are highly concurrent, so it is important that we are able to provide caching strategies for concurrent systems. However, to date the majority of work in concurrent data-structures has focused on lock free or systems that do not require transactional guarantees. While this is applicable in many cases, it does not suit all applications, limiting many applications to mutexs or read-write locks around existing data-structures. In this document, I will present a strategy to create a concurrently readable adaptive replacement cache. 
This cache guarantees temporal consistency, provides ACID and serialisable properties, and maintains the invariants that define an adaptive replacement cache. Additionally, it supports multiple concurrent readers with isolated transactions, alongside a serialised writer. The algorithm also has interesting side effects with regard to observing cache interactions, allowing it to retain cached data more accurately.

## Concurrent Systems

To understand why a concurrent cache is relevant requires exploration of modern concurrent systems. Modern CPUs, while masquerading as simple devices that execute instructions in order, are actually concurrent, out-of-order task execution machines with asynchronous memory views. At any point in time this means a CPU's view of memory will be either "latest" or some point within a timeline into the past [fn 0]. This is because each CPU's cache is independent of the caches of every other CPU in the system, both within a die package and across a NUMA boundary. Updates to memory from one CPU require coordination via the MESI protocol [r 0]. Each cache maintains its own MESI state machine, and they coordinate through inter-processor communication when required.

Examining MESI, it becomes apparent that it is best to keep cache-lines from becoming `Invalid`, as this state causes the highest level of inter-processor communication to validate the content of the cache, which adds significant delays to any operation. The fastest lifecycle for memory is therefore: new memory is allocated or used, which sets it to `Exclusive`/`Modified`; once all writeable actions have completed, the memory may be accessed by other cores, which moves the state to `Shared`; and only once all cores have completed their operations is the memory freed.
At no point does the memory move through the `Invalid` state, which carries the highest penalties for inter-processor communication. This means that once a value becomes `Shared` it cannot be modified again in-place. Knowing this, in highly parallel systems we want to determine a method of keeping application-level memory in these states, which helps to improve our system's performance. These states are similar to concurrent readability, a property of systems where a single writer exists with many parallel readers, but the parallel readers have a guaranteed snapshot (transaction) of memory. This can be achieved through copy on write, which matches closely to the model of MESI states we want to achieve. As a result, the applications that will benefit most from these designs are:

* High concurrency, read oriented
* Transactional

## Single Thread Adaptive Replacement Cache

ARC (Adaptive Replacement Cache) is a design proposed by IBM [r 1, 2], which improves upon strategies like LRU (least recently used). ARC was used with great success in ZFS [r 3] for caching objects in memory. A traditional LRU uses a single DLL (double linked list) with a fast lookup structure, such as a hash-map, to offset into nodes of the DLL. ARC uses four DLLs, with lists for frequent, recent, ghost-frequent and ghost-recent items, as well as a weight factor `p`.

The operation of `p` is one of the most important aspects of how ARC works. `p` is initially set to 0, and only increases toward positive numbers or back toward 0. `p` represents "demand on the recent set", and initially there is no demand on the recent set. We start with an empty cache (of max size 4) and `p=0`. We add 4 elements, which are added to the recent set (as they do not exist in any other set at this time). We then add another 4 items. This displaces the original 4 elements' keys to the ghost-recent set. The 4 elements in the main cache are now accessed again. This promotes them from recent to frequent.
We then attempt to read an element that is in the ghost-recent set. We re-include the element into the cache, causing there to be 3 elements in the frequent set and 1 in the recent set. Since the key was found in the ghost-recent set this indicates demand, so `p` is adjusted to 1. If this process happens again, another element is included in recent, so `p` adjusts again to 2. If an item in the recent set is now touched again, it is promoted to frequent, displacing an item, so that frequent and recent both stay at size 2 (leaving an empty slot in recent). An access to a new value would then be included in recent, as there is a free slot available. If we then miss on an item that was excluded from frequent, by seeing it in the ghost-frequent set, this indicates demand on frequent items, so `p` is reduced by 1. This continues until `p` is 0. It is important to note that the ghost set sizes are also bounded by `p`. When `p` is 0, the ghost-frequent size is 0, as is the recent set size; the sizes of ghost-recent and frequent must therefore be the cache max. As `p` increases, ghost-frequent and recent grow in size, while ghost-recent and frequent shrink. This way, as items are evicted and `p` shifts, we do not accumulate an unbounded set of ghost items that could cause evictions or `p` shifts unexpectedly.
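The `p` movement in this walkthrough can be sketched as follows. This is a minimal illustration only; the struct and method names are invented for this sketch and are not the crate's API.

```rust
// A minimal sketch of how the weight `p` reacts to ghost-set hits,
// following the walkthrough above. `max` is the cache capacity.
struct Weights {
    p: usize,
    max: usize,
}

impl Weights {
    fn new(max: usize) -> Self {
        Weights { p: 0, max }
    }

    // A miss found in the ghost-recent set indicates demand on the
    // recent set, so `p` grows (bounded by the cache max).
    fn hit_ghost_recent(&mut self) {
        if self.p < self.max {
            self.p += 1;
        }
    }

    // A miss found in the ghost-frequent set indicates demand on the
    // frequent set, so `p` shrinks back toward 0.
    fn hit_ghost_frequent(&mut self) {
        self.p = self.p.saturating_sub(1);
    }

    // Target size of the recent set; frequent gets the remainder.
    fn recent_target(&self) -> usize {
        self.p
    }
    fn frequent_target(&self) -> usize {
        self.max - self.p
    }
}

fn main() {
    let mut w = Weights::new(4);
    // Two ghost-recent hits: demand shifts toward recent.
    w.hit_ghost_recent();
    w.hit_ghost_recent();
    assert_eq!((w.recent_target(), w.frequent_target()), (2, 2));
    // A ghost-frequent hit pulls `p` back toward 0.
    w.hit_ghost_frequent();
    assert_eq!(w.recent_target(), 1);
}
```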

With the cache always adapting `p` between recent inclusions and high-frequency accesses, the cache is able to satisfy a variety of workloads, and other research [r 7] has shown it has a better hit rate than many other cache strategies, for a low overhead of administration. Additionally, it is resistant to a number of cache invalidation/poisoning patterns, such as repeated hits on a single item causing items to never be evicted (which affects LFU), or scans of large sets causing complete eviction (which affects LRU). To explain: an item in the frequent set that is accessed many times in an LFU could not be evicted, as its counter has become very high and another item would need as many hits to displace it - this may cause stale data to remain in the cache. In ARC, however, if a frequent item is accessed many times but then different data becomes highly accessed, this causes demand on recent and then promotions to frequent, which quickly displaces the item that was previously accessed many times. In terms of resistance to scans, an LRU under a scan would have many items evicted and included rapidly. In ARC, however, because many items may be invalidated and included, only the items in the ghost-recent set cause a change in the weight `p`, so other items stored in the frequent set are not evicted by the scan.

This makes ARC an attractive choice for an application cache, and as mentioned it has already been proven through its use in systems like ZFS; even the Linux memory management subsystem has considered ARC viable [r 8]. However, due to the presence of multiple linked lists, and the updates required such as moving items from one set to another, it is not possible to do this with atomic CPU instructions. To remain a consistent data-structure, these changes must only be performed by a single thread. This poses a problem for modern concurrent systems.
A single thread cache behind a mutex or read-write lock becomes an obvious serialisation point, and having a cache per thread eliminates much of the value of shared CPU caches, as each CPU would have to hold duplicated content - and thus smaller per-thread caches, which may hinder large datasets from being cached effectively.

## Concurrent B+Tree

A concurrently readable B+tree design exists, and is used in btrfs [r 4]. The design is such that any update copies only the affected tree node and the nodes on the path from the root to that leaf. This means that on a tree with a large number of nodes, a single write requires a copy of only a minimal set of nodes. For example, given a tree where each node has 7 descendants, and the tree has 823543 nodes (4941258 key value pairs), updating a single node only requires 6 nodes to be copied.
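The concurrently readable property that this copy-on-write design enables can be sketched with a simple root-pointer swap. This is an illustrative sketch only; `SnapshotCell` is a made-up name, and a real implementation (such as this crate's) is considerably more involved.

```rust
use std::sync::{Arc, Mutex};

// A minimal sketch of concurrent readability via copy-on-write root
// swapping. Readers clone the current root pointer and keep a stable
// snapshot for as long as they hold it; the writer builds a new
// version and publishes it on commit.
struct SnapshotCell<T> {
    current: Mutex<Arc<T>>,
}

impl<T> SnapshotCell<T> {
    fn new(v: T) -> Self {
        SnapshotCell {
            current: Mutex::new(Arc::new(v)),
        }
    }

    // A read transaction: a cheap pointer clone. Old snapshots stay
    // alive (and `Shared`, never `Invalid`) until their readers drop
    // them.
    fn read(&self) -> Arc<T> {
        self.current.lock().unwrap().clone()
    }

    // Commit: publish a new version. Existing readers are unaffected.
    fn commit(&self, v: T) {
        *self.current.lock().unwrap() = Arc::new(v);
    }
}

fn main() {
    let c = SnapshotCell::new(vec![1, 2, 3]);
    let snapshot = c.read();
    c.commit(vec![1, 2, 3, 4]);
    // The earlier reader still sees its point-in-time view.
    assert_eq!(snapshot.len(), 3);
    // New readers see the committed version.
    assert_eq!(c.read().len(), 4);
}
```

In a real tree the writer would clone only the root-to-leaf path rather than the whole value, but the reader-visible semantics are the same.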

This copy on write behaviour has a valuable property: if we preserve previous tree roots, they remain valid, whole trees, while new roots point to their own complete and valid trees. This allows readers across multiple transaction generations to have stable and consistent views of a key-value set, while parallel writes continue to occur in the tree. Additionally, because nodes are cloned, they are never removed from the `Shared` cache state, with the new content becoming `Exclusive`/`Modified` until the new root is committed - at which time new readers will begin to see the new root and its values. The number of nodes that require inclusion to perceive a new tree version is minimal, as many older nodes remain identical and in their `Shared` state. This has excellent behaviours for filesystems, but it is useful for databases as well: WiredTiger [r 5] uses a concurrent tree to achieve high performance for MongoDB. A question that arises is whether this extra copying of data could cause excess cache invalidation, as multiple copies of the data now exist. It is certainly true that the copying would invalidate some items of the CPU cache; however, those items would be the least recently used items of the cache, and during a write the work-set would remain resident in the writing CPU's L1 cache, without affecting the caches of other CPUs. As mentioned, the value of a copy on write structure is that the number of updates required relative to the size of the dataset is small, which also limits the amount of CPU cache invalidation occurring.

## Concurrent ARC Design

To create a concurrently readable cache, some constraints must be defined:
* Readers must always have a correct "point in time" view of the cache and its data
* Readers must be able to trigger cache inclusions
* Readers must be able to track cache hits accurately
* Readers are isolated from all other readers and writer actions
* Writers must always have a correct "point in time" view
* Writers must be able to rollback changes without penalty
* Writers must be able to trigger cache inclusions
* Writers must be able to track cache hits accurately
* Writers are isolated from all readers
* The cache must maintain correct temporal ordering of items in the cache
* The cache must properly update hit and inclusion events based on readers and writers
* The cache must provide ARC semantics for management of items
* The cache must be concurrently readable and transactional
* The overhead compared to single thread ARC must be minimal

To clarify why these requirements are important: while it may seem obvious that readers and writers must be able to track inclusions and hits correctly, compare this to a read/write lock - if the cache were behind a read lock, we could not alter the state of the ARC, meaning that tracking information would be lost. As we want to have many concurrent readers, it is important that we track their access patterns, since readers represent the majority of demand on our cache system.

In order to satisfy these requirements, an extended structure is needed to allow asynchronous communication between the readers/writer and the ARC layer, since ARC itself is not thread safe (doubly linked lists). A multiple-producer single-consumer (mpsc) queue is added to the primary structure, which allows readers to send their data to the writer asynchronously, so that reader events can be acted on. Additionally, an extra linked list called "haunted" is added to track keys that have existed in the set.
This creates the following pseudo structures:

    Arc {
        Mutex<{
            p,
            freq_list,
            rec_list,
            ghost_freq_list,
            ghost_rec_list,
            haunted_list,
            rx_channel,
        }>,
        cache_set,
        max_capacity,
        tx_channel,
    }

    ArcRead {
        cache_set_ro,
        thread_local_set,
        tx_channel,
        timestamp,
    }

    ArcWrite {
        cache_set_rw,
        thread_local_set,
        hit_array,
    }

The majority of the challenge is during the writer commit. To understand this, we need to understand what the readers and writers are doing, and how they communicate with the commit phase.

A reader acts like a normal cache - on a request it attempts to find the item in its thread local set. If it is found, we return the item. If it is not found, we attempt to search the read only cache set. Again if found we return, else we indicate we do not have the item. On a hit the channel is issued a `Hit(timestamp, K)`, notifying the writer that a thread has the intent to access the item K at timestamp. If the reader misses, the caller may choose to include the item into the reader transaction. This is achieved by adding the item to the thread local set, allowing each reader to build a small thread local set of items relevant to that operation. In addition, when an item is added to the thread local set, an inclusion message is sent to the channel, consisting of `Inc(K, V, transaction_id, timestamp)`. This transaction id is from the read only cache transaction that is occurring.
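The channel traffic described above might look like the following. The variant and field names here are illustrative, not the crate's API.

```rust
use std::sync::mpsc;

// A sketch of the reader-to-writer messages described above.
#[derive(Debug)]
enum CacheEvent<K, V> {
    // "A thread intends to access item K at this timestamp."
    Hit { timestamp: u64, key: K },
    // "A reader included K=V under this read transaction."
    Inc { key: K, value: V, txid: u64, timestamp: u64 },
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // A reader hits key 7 at timestamp 1 ...
    tx.send(CacheEvent::Hit { timestamp: 1, key: 7u32 }).unwrap();
    // ... and includes key 9 read under read transaction 4.
    tx.send(CacheEvent::Inc { key: 9, value: "v", txid: 4, timestamp: 2 })
        .unwrap();
    drop(tx);
    // The writer drains these asynchronously during its commit.
    assert_eq!(rx.iter().count(), 2);
}
```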

At the end of the read operation, the thread local set is discarded - any included items have already been sent via the channel. This allows long running readers to influence the commits of shorter reader cycles, so that other readers that may spawn can benefit from this reader's inclusions. The writer acts in a very similar manner. During its operation, cache misses are stored in the thread local set, and hits are stored in the hit array. Dirty items (new or modified) may also be stored in the thread local set. By searching the thread local set first, we always return the items relevant to this operation, including any that have been dirtied by the current thread.
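The writer's lookup order can be sketched as follows (all names here are illustrative):

```rust
use std::collections::HashMap;

// A sketch of the writer lookup order described above: the thread
// local (dirty) set is searched before the shared cache snapshot,
// and hits are recorded for the commit phase.
struct Writer<'a> {
    local: HashMap<u32, String>,        // dirty / newly included items
    snapshot: &'a HashMap<u32, String>, // read-only view of the cache
    hits: Vec<u32>,                     // hit array drained at commit
}

impl<'a> Writer<'a> {
    fn get(&mut self, k: u32) -> Option<&String> {
        if self.local.contains_key(&k) {
            self.hits.push(k);
            return self.local.get(&k);
        }
        if let Some(v) = self.snapshot.get(&k) {
            self.hits.push(k);
            return Some(v);
        }
        None // a miss: the caller may choose to include the item
    }
}

fn main() {
    let mut snapshot = HashMap::new();
    snapshot.insert(1, "shared".to_string());
    let mut w = Writer {
        local: HashMap::new(),
        snapshot: &snapshot,
        hits: Vec::new(),
    };
    w.local.insert(1, "dirty".to_string());
    // The dirty local copy shadows the shared snapshot.
    assert_eq!(w.get(1).unwrap(), "dirty");
    assert_eq!(w.hits, vec![1]);
}
```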

A writer does *not* alter the properties of the Arc during its operation - this is critical, as it allows the writer to be rolled back at any time without affecting the state of the cache set or the frequency lists. While on rollback we may lose the details of the writer's hit array, this is not a significant loss of data in my view, as the readers' hit data matters more. It is during the commit of a writer - when the caller has confirmed that they want the writer's changes to persist and be visible to future readers and writers - that we perform all cache management actions.

### Commit Phase

During the commit of the writer, we perform a number of steps to update the Arc state. First, the commit phase notes the current monotonic timestamp and the current transaction id of the writer. The commit then drains the complete writer thread local state into the main cache, updating cache item states as each item is treated as a hit or inclusion event. Each item's transaction id is updated to the transaction id of the writer.

Next, the commit drains from the mpsc channel until it is empty, or until a hit or include item's timestamp exceeds the timestamp taken at the start of the commit operation. This exists so that a commit will not drain forever on a busy read cache, only updating the cache to the point in time at which the commit phase began. Items from the channel are included only if their transaction id is equal to or greater than the transaction id of the item already in the cache. If the transaction id is lower, the event acts as a hit instead of an inclusion, to affect the weightings of the caches.
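These drain rules can be sketched as follows. This is an illustrative model, not the crate's implementation, and the names are invented.

```rust
use std::collections::HashMap;

// A sketch of the commit-time drain rules described above: stop once
// an event's timestamp passes the commit start, and demote an include
// with a stale transaction id to a hit.
enum Event {
    Hit { t: u64, k: u32 },
    Inc { t: u64, k: u32, v: u64, txid: u64 },
}

struct Item {
    v: u64,
    txid: u64,
    hits: u64,
}

fn drain(commit_start: u64, events: Vec<Event>, cache: &mut HashMap<u32, Item>) {
    for ev in events {
        match ev {
            // Only consume events up to the commit's start time, so a
            // busy read channel cannot stall the commit forever.
            Event::Hit { t, .. } | Event::Inc { t, .. } if t > commit_start => break,
            Event::Hit { k, .. } => {
                if let Some(item) = cache.get_mut(&k) {
                    item.hits += 1;
                }
            }
            Event::Inc { k, v, txid, .. } => match cache.get_mut(&k) {
                // Equal or newer transaction: a genuine update.
                Some(item) if txid >= item.txid => {
                    item.v = v;
                    item.txid = txid;
                }
                // Older transaction: never overwrite newer data with a
                // stale value; count it as a hit instead.
                Some(item) => item.hits += 1,
                None => {
                    cache.insert(k, Item { v, txid, hits: 0 });
                }
            },
        }
    }
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert(1, Item { v: 20, txid: 5, hits: 0 });
    // A stale include (txid 3 < 5) must not clobber the newer value.
    drain(10, vec![Event::Inc { t: 1, k: 1, v: 99, txid: 3 }], &mut cache);
    assert_eq!(cache[&1].v, 20);
    assert_eq!(cache[&1].hits, 1);
}
```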

This detail - where items are only updated if their transaction id is greater or equal - is one of the most important for maintaining temporal consistency, and is the reason for the existence of the new haunted set. The haunted set maintains the transaction id at which each key was evicted from the cache. Consider the following series of events:

    reader begins tx1
    writer begins tx2
    writer alters E
    writer commit
    reader begins tx3
    reader quiesce -> evict E
    reader (tx1) reads E
    reader (tx1) sends Inc(E)
    reader quiesce (*)

In this sequence, as the reader at tx1 issues the inclusion, its view of entry E is older than the actual DB state of E - this would cause the inclusion of an outdated entry at the last reader quiesce point, corrupting the data and losing changes. With the haunted set, the key of item E would be in the haunted set from tx3, causing the include of E from tx1 to be tracked as a hit rather than an include. It is for this reason that all changes to the cache must be tracked in transaction order, and that the cache must track all items ever included in the haunted set, to understand at which transaction id they were last observed in the cache, preserving temporal ordering and consistency.

The commit then drains the writer hit set into the cache. Because the writer exists somewhat after the readers in time, this is an approximation of the temporal ordering of events, and gives weight to the written items, protecting them from sudden eviction.
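The haunted-set decision described above can be sketched as follows (illustrative names only):

```rust
use std::collections::HashMap;

// A sketch of the haunted-set rule: an include from a reader is only
// honoured if its transaction id is not older than the transaction at
// which the key was last evicted.
fn accept_include(haunted: &HashMap<u32, u64>, key: u32, inc_txid: u64) -> bool {
    match haunted.get(&key) {
        // The key was evicted at `evicted_at`; an include from an
        // older transaction is stale and must be demoted to a hit.
        Some(&evicted_at) => inc_txid >= evicted_at,
        // Never haunted: the include is safe.
        None => true,
    }
}

fn main() {
    let mut haunted = HashMap::new();
    haunted.insert(5u32, 3u64); // key 5 was evicted at tx3
    assert!(!accept_include(&haunted, 5, 1)); // tx1 include -> hit only
    assert!(accept_include(&haunted, 5, 4)); // tx4 include -> accepted
    assert!(accept_include(&haunted, 9, 1)); // never haunted -> accepted
}
```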

Finally, the caches are evicted to their relevant sizes based on the updates to the `p` weight factor. All evicted items are sent to the haunted set with the current transaction id, to protect them from incorrect inclusion in the future.
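The set-size targets implied by `p` can be sketched as follows, following the sizing rules described earlier (an illustrative model, not the crate's implementation):

```rust
// A sketch of the commit-time eviction targets implied by `p`: as `p`
// grows, recent and ghost-frequent grow, while frequent and
// ghost-recent shrink, keeping each pair bounded by the cache max.
fn targets(max: usize, p: usize) -> (usize, usize, usize, usize) {
    let p = p.min(max);
    // (recent, frequent, ghost_recent, ghost_frequent)
    (p, max - p, max - p, p)
}

fn main() {
    // p = 0: all capacity to frequent, ghost-recent mirrors it.
    assert_eq!(targets(4, 0), (0, 4, 4, 0));
    // p = 3: demand has shifted toward recent inclusions.
    assert_eq!(targets(4, 3), (3, 1, 1, 3));
}
```

During commit, each list is then trimmed down to its target, with the trimmed keys moved to the ghost or haunted sets as appropriate.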

## Side Effects of this Algorithm

An interesting side effect of this delayed inclusion, and of the batched eviction in the commit phase, is that the cache now has temporal observability: we have the effects of many threads, at many points in time, updating the cache and its items. This gives the cache a limited clairvoyance effect [r 6], so that items may not be evicted and then rapidly re-included, instead remaining in the cache, and items that are evicted are those less demanded not just by this thread, but by the whole set of threads contributing to the channel and cache updates.

However, this comes at a cost - this algorithm is extremely memory intensive. A mutex cache will always maintain a near perfect number of items in memory based on the requested max. This system will regularly, and almost always, exceed that. This is because the included items in the queue are untracked, each thread has a thread local set, the haunted set must keep all keys ever perceived, and during the commit phase items are not evicted until the entire state is known, causing ballooning. As a result, implementors or deployments may need to reduce the cache size to prevent exceeding memory limits, though on a modern system this may not be a concern in many cases. Future changes could be to use bounded channels, and to have the channel drop items during high memory pressure or high levels of include events, but this weakens the cache's clairvoyant effect. Another option could be to split the hit and inclusion channels, such that the hits remain unbounded due to their small size - it is the hit tracking that gives us insight into which items should be prioritised. Currently the thread local sets are also unbounded, and these could be bounded to help reduce issues.

## Relevant Applications

Applications which have read-mostly workloads and serialised writes will benefit highly from this design.
Some examples are: * LDAP * Key-Value store databases * Web-server File Caching ## Acknowledgements * Ilias Stamatiss (Reviewer) ## References * 0 - https://en.wikipedia.org/wiki/MESI_protocol * 1 - https://web.archive.org/web/20100329071954/http://www.almaden.ibm.com/StorageSystems/projects/arc/ * 2 - https://www.usenix.org/system/files/login/articles/1180-Megiddo.pdf * 3 - https://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/ * 4 - https://domino.research.ibm.com/library/cyberdig.nsf/papers/6E1C5B6A1B6EDD9885257A38006B6130/$File/rj10501.pdf * 5 - http://www.wiredtiger.com/ * 6 - https://en.wikipedia.org/wiki/Cache_replacement_policies#B%C3%A9l%C3%A1dy's_algorithm * 7 - http://www.cs.biu.ac.il/~wiseman/2os/2os/os2.pdf * 8 - https://linux-mm.org/AdvancedPageReplacement ## Footnote * 0 - It is yet unknown to the author if it is possible to have a CPU with the capability of predicting the future content of memory and being able to cache that reliably, but I'm sure that someone is trying to develop it. concread-0.4.6/CODE_OF_CONDUCT.md000064400000000000000000000061401046102023000142110ustar 00000000000000## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 
## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others’ private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at: * william at blackhats.net.au All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership. ## Attribution This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html concread-0.4.6/CONTRIBUTORS.md000064400000000000000000000002071046102023000136670ustar 00000000000000## Author * William Brown (Firstyear): william@blackhats.net.au ## Contributors * Jake (slipperyBishop) * Youngsuk Kim (@JOE1994) concread-0.4.6/Cargo.toml0000644000000047530000000000100106300ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. 
[package] edition = "2021" name = "concread" version = "0.4.6" authors = ["William Brown "] description = "Concurrently Readable Data-Structures for Rust" homepage = "https://github.com/kanidm/concread/" documentation = "https://docs.rs/concread/latest/concread/" readme = "README.md" keywords = [ "concurrency", "lru", "mvcc", "copy-on-write", "transactional-memory", ] categories = [ "data-structures", "memory-management", "caching", "concurrency", ] license = "MPL-2.0" repository = "https://github.com/kanidm/concread/" [lib] name = "concread" path = "src/lib.rs" [[bench]] name = "hashmap_benchmark" harness = false [[bench]] name = "arccache" harness = false [dependencies.ahash] version = "0.8" optional = true [dependencies.crossbeam-epoch] version = "0.9.11" optional = true [dependencies.crossbeam-queue] version = "0.3.6" optional = true [dependencies.crossbeam-utils] version = "0.8.12" optional = true [dependencies.lru] version = "0.12" optional = true [dependencies.serde] version = "1.0" optional = true [dependencies.smallvec] version = "1.4" optional = true [dependencies.sptr] version = "0.3" [dependencies.tokio] version = "1" features = ["sync"] optional = true [dependencies.tracing] version = "0.1" [dev-dependencies.criterion] version = "0.3" features = ["html_reports"] [dev-dependencies.function_name] version = "0.3" [dev-dependencies.rand] version = "0.8" [dev-dependencies.serde_json] version = "1.0" [dev-dependencies.time] version = "0.3" [dev-dependencies.tokio] version = "1" features = [ "rt", "macros", ] [dev-dependencies.tracing-subscriber] version = "0.3" features = [ "env-filter", "std", "fmt", ] [dev-dependencies.uuid] version = "1.0" [features] arcache = [ "maps", "lru", "crossbeam-queue", ] asynch = ["tokio"] default = [ "asynch", "ahash", "ebr", "maps", "arcache", ] ebr = ["crossbeam-epoch"] hashtrie_skinny = [] maps = [ "crossbeam-utils", "smallvec", ] skinny = [] tcache = [] 
concread-0.4.6/Cargo.toml.orig000064400000000000000000000034421046102023000143030ustar 00000000000000[package] name = "concread" version = "0.4.6" authors = ["William Brown "] edition = "2021" description = "Concurrently Readable Data-Structures for Rust" documentation = "https://docs.rs/concread/latest/concread/" homepage = "https://github.com/kanidm/concread/" repository = "https://github.com/kanidm/concread/" readme = "README.md" keywords = ["concurrency", "lru", "mvcc", "copy-on-write", "transactional-memory"] categories = ["data-structures", "memory-management", "caching", "concurrency"] license = "MPL-2.0" [lib] name = "concread" path = "src/lib.rs" [features] # Features to add/remove contents. asynch = ["tokio"] ebr = ["crossbeam-epoch"] maps = ["crossbeam-utils", "smallvec"] tcache = [] arcache = ["maps", "lru", "crossbeam-queue"] # ahash is another feature here # Internal features for tweaking some alighn/perf behaviours. skinny = [] hashtrie_skinny = [] default = ["asynch", "ahash", "ebr", "maps", "arcache"] [dependencies] ahash = { version = "0.8", optional = true } crossbeam-utils = { version = "0.8.12", optional = true } crossbeam-epoch = { version = "0.9.11", optional = true } crossbeam-queue = { version = "0.3.6", optional = true } lru = { version = "0.12", optional = true } serde = { version = "1.0", optional = true } smallvec = { version = "1.4", optional = true } sptr = "0.3" tokio = { version = "1", features = ["sync"], optional = true } tracing = "0.1" [dev-dependencies] criterion = { version = "0.3", features = ["html_reports"] } rand = "0.8" time = "0.3" tracing-subscriber = { version = "0.3", features = ["env-filter", "std", "fmt"] } uuid = "1.0" function_name = "0.3" serde_json = "1.0" tokio = { version = "1", features = ["rt", "macros"] } [[bench]] name = "hashmap_benchmark" harness = false [[bench]] name = "arccache" harness = false concread-0.4.6/LICENSE.md000064400000000000000000000405271046102023000130250ustar 00000000000000Mozilla 
Public License Version 2.0 ================================== 1. Definitions -------------- 1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution. 1.3. "Contribution" means Covered Software of a particular Contributor. 1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. "Incompatible With Secondary Licenses" means (a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or (b) that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. "Executable Form" means any form of the work other than Source Code Form. 1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. "License" means this document. 1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. "Modifications" means any of the following: (a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or (b) any new file in Source Code Form that contains any Covered Software. 1.11. 
"Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. "Source Code Form" means the form of the work preferred for making modifications. 1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions -------------------------------- 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: (a) under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and (b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. 
Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: (a) for any code that a Contributor has removed from Covered Software; or (b) for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or (c) under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities ------------------- 3.1. 
Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: (a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and (b) You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. 
Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation --------------------------------------------------- If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination -------------- 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. 
However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. ************************************************************************ * * * 6. Disclaimer of Warranty * * ------------------------- * * * * Covered Software is provided under this License on an "as is" * * basis, without warranty of any kind, either expressed, implied, or * * statutory, including, without limitation, warranties that the * * Covered Software is free of defects, merchantable, fit for a * * particular purpose or non-infringing. The entire risk as to the * * quality and performance of the Covered Software is with You. 
* * Should any Covered Software prove defective in any respect, You * * (not any Contributor) assume the cost of any necessary servicing, * * repair, or correction. This disclaimer of warranty constitutes an * * essential part of this License. No use of any Covered Software is * * authorized under this License except under this disclaimer. * * * ************************************************************************ ************************************************************************ * * * 7. Limitation of Liability * * -------------------------- * * * * Under no circumstances and under no legal theory, whether tort * * (including negligence), contract, or otherwise, shall any * * Contributor, or anyone who distributes Covered Software as * * permitted above, be liable to You for any direct, indirect, * * special, incidental, or consequential damages of any character * * including, without limitation, damages for lost profits, loss of * * goodwill, work stoppage, computer failure or malfunction, or any * * and all other commercial damages or losses, even if such party * * shall have been informed of the possibility of such damages. This * * limitation of liability shall not apply to liability for death or * * personal injury resulting from such party's negligence to the * * extent applicable law prohibits such limitation. Some * * jurisdictions do not allow the exclusion or limitation of * * incidental or consequential damages, so this exclusion and * * limitation may not apply to You. * * * ************************************************************************ 8. Litigation ------------- Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims. 9. 
Miscellaneous ---------------- This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License --------------------------- 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice ------------------------------------------- This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. Exhibit B - "Incompatible With Secondary Licenses" Notice --------------------------------------------------------- This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0. concread-0.4.6/Makefile000064400000000000000000000000641046102023000130510ustar 00000000000000 check: cargo test cargo outdated -R cargo audit concread-0.4.6/README.md000064400000000000000000000106061046102023000126730ustar 00000000000000Concread ======== Concurrently readable datastructures for Rust. Concurrently readable is often referred to as Copy-On-Write, Multi-Version-Concurrency-Control. These structures allow multiple readers with transactions to proceed while single writers can operate. A reader is guaranteed the content will remain the same for the duration of the read, and readers do not block writers. Writers are serialised, just like a mutex. This library contains concurrently readable Cell types and Map/Cache types. When do I want to use these? ---------------------------- You can use these in place of a RwLock, and will likely see improvements in parallel throughput. The best use is in place of mutex/rwlock, where the reader exists for a non-trivial amount of time. For example, if you have a RwLock where the lock is taken, data changed or read, and dropped immediately, this probably won't help you. However, if you have a RwLock where you hold the read lock for any amount of time, writers will begin to stall - or inversely, the writer will cause readers to block and wait as the writer proceeds. Concurrently readable avoids this because readers never stall other readers or writers, and writers never stall or block readers. 
This means that you gain in parallel throughput as stalls are reduced. This library also has a concurrently readable BTreeMap, HashMap and Adaptive Replacement Cache. These are best used when you have at least 512 bytes worth of data in your Cell, as they only copy what is required for an update. If you do not require key-ordering, then the HashMap will likely be the best choice for most applications. What is concurrently readable? ------------------------------ In a multithreaded application, data commonly needs to be shared between threads. There are multiple policies for this - Atomics for single integer reads, Mutexes for single-thread access, RwLock for many readers or one writer, all the way to Lock Free, which allows multiple reads and writes of queues. Lock Free however has the limitation of being built on Atomics. This means it can really only update small amounts of data at a time consistently. It also means that you don't have transactional behaviours. While this is great for queues, it's not so good for a tree or hashmap where you want the state to be consistent from the start to the end of an operation. In the few places that lock free trees exist, they have the property that as each thread is updating the tree, the changes are visible immediately to all other readers. Your data could change before you know it. Mutexes and RwLock on the other hand allow much more complex structures to be protected. They guarantee that all readers see the same data, always, and that there is only ever a single writer. But they cause stalls on other threads waiting to access them. RwLock for example can see large delays if a reader won't yield, and OS policy can cause reader/writer starvation if the priority favours the other. Concurrently readable structures sit in between these two points. They provide multiple concurrent readers, with transactional behaviour, while allowing a single writer to proceed simultaneously. 
This is achieved by having writers copy the internal data before they modify it. This allows readers to access the old data, without modification, and allows the writer to change the data in place before committing. Once the new data is stored, old readers continue to access their old data - new readers will see the new data. This is a space-time trade-off, using more memory to achieve better parallel behaviour. Safety ------ This library has extensive testing, and passes its test suite under [miri], a Rust undefined behaviour checker. If you find an issue however, please let us know so we can fix it! To check with miri OR asan on nightly: # Follow the miri readme setup steps cargo clean && MIRIFLAGS="-Zmiri-disable-isolation -Zmiri-disable-stacked-borrows" cargo miri test RUSTC_FLAGS="-Z sanitizer=address" cargo test Note: Miri requires isolation to be disabled so that clock monotonic can be used in ARC for cache channels. [miri]: https://github.com/rust-lang/miri SIMD ---- There is support for SIMD in ARC if you are using a nightly compiler. To use this, you need to compile with: RUSTFLAGS="-C target-feature=+avx2,+avx" cargo ... 
--features=concread/simd_support Contributing ------------ Please open an issue, PR, or contact me directly by email (see GitHub) concread-0.4.6/asan_test.sh000075500000000000000000000001511046102023000137260ustar 00000000000000#!/bin/sh #shellcheck disable=SC2068 RUSTC_BOOTSTRAP=1 RUSTFLAGS="-Z sanitizer=address" cargo test $@ concread-0.4.6/benches/arccache.rs000064400000000000000000000446311046102023000151270ustar 00000000000000use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput}; use function_name::named; use rand::distributions::uniform::SampleUniform; use rand::{thread_rng, Rng}; use std::collections::HashMap; use std::fmt::Debug; use std::hash::Hash; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::Arc; use std::thread; use std::time::{Duration, Instant}; // use uuid::Uuid; use concread::arcache::{ARCache, ARCacheBuilder}; use concread::threadcache::ThreadLocal; use criterion::measurement::{Measurement, ValueFormatter}; pub static RUNNING: AtomicBool = AtomicBool::new(false); /* * A fixed dataset size, with various % of cache pressure (5, 10, 20, 40, 60, 80, 110) * ^ * \--- then vary the dataset sizes. Inclusions are always processed here. * -- vary the miss time/penalty as well? * * -- this measures time to complete. * * As above but could we measure hit rate as well? 
* */ #[derive(Debug)] struct DataPoint { elapsed: Duration, #[allow(dead_code)] csize: usize, hit_count: u32, attempt: u32, #[allow(dead_code)] hit_pct: f64, } #[derive(Clone)] enum AccessPattern<T> where T: SampleUniform + PartialOrd + Clone, { Random(T, T), } impl<T> AccessPattern<T> where T: SampleUniform + PartialOrd + Clone, { fn next(&self) -> T { match self { AccessPattern::Random(min, max) => { let mut rng = thread_rng(); rng.gen_range(min.clone()..max.clone()) } } } } pub struct HitPercentageFormatter; impl ValueFormatter for HitPercentageFormatter { fn format_value(&self, value: f64) -> String { // eprintln!("⚠️ format_value -> {:?}", value); format!("{}%", value) } fn format_throughput(&self, _throughput: &Throughput, value: f64) -> String { // eprintln!("⚠️ format_throughput -> {:?}", value); format!("{}%", value) } fn scale_values(&self, _typical_value: f64, _values: &mut [f64]) -> &'static str { // eprintln!("⚠️ scale_values -> typ {:?} : {:?}", typical_value, values); // panic!(); "%" } fn scale_throughputs( &self, _typical_value: f64, _throughput: &Throughput, _values: &mut [f64], ) -> &'static str { // eprintln!("⚠️ scale_throughputs -> {:?}", values); "%" } fn scale_for_machines(&self, _values: &mut [f64]) -> &'static str { // eprintln!("⚠️ scale_machines -> {:?}", values); "%" } } pub struct HitPercentage; impl Measurement for HitPercentage { type Intermediate = (u32, u32); type Value = (u32, u32); fn start(&self) -> Self::Intermediate { unreachable!("HitPercentage requires the use of iter_custom"); } fn end(&self, i: Self::Intermediate) -> Self::Value { // eprintln!("⚠️ end -> {:?}", i); i } fn add(&self, v1: &Self::Value, v2: &Self::Value) -> Self::Value { // eprintln!("⚠️ add -> {:?} + {:?}", v1, v2); (v1.0 + v2.0, v1.1 + v2.1) } fn zero(&self) -> Self::Value { // eprintln!("⚠️ zero -> (0,0)"); (0, 0) } fn to_f64(&self, val: &Self::Value) -> f64 { let x = (f64::from(val.0) / f64::from(val.1)) * 100.0; // eprintln!("⚠️ to_f64 -> {:?} -> {:?}", val, x); 
x } fn formatter(&self) -> &dyn ValueFormatter { &HitPercentageFormatter } } fn tlocal_multi_thread_worker<K, V>( mut cache: ThreadLocal<K, V>, backing_set: Arc<HashMap<K, V>>, backing_set_delay: Option<Duration>, access_pattern: AccessPattern<K>, ) where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static + SampleUniform + PartialOrd, V: Clone + Debug + Sync + Send + 'static, { while RUNNING.load(Ordering::Relaxed) { let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); let mut rd_txn = cache.read(); // hit/miss process. if !rd_txn.contains_key(&k) { if let Some(delay) = backing_set_delay { thread::sleep(delay); } rd_txn.insert(k, v); } } } fn run_tlocal_multi_thread_test<K, V>( // Number of iterations iters: u64, // Number of iters to warm the cache. warm_iters: u64, // pct of backing set size to configure into the cache. cache_size_pct: u64, // backing set. backing_set: HashMap<K, V>, // backing set access delay on miss backing_set_delay: Option<Duration>, // How to lookup keys during each iter. access_pattern: AccessPattern<K>, // How many threads? thread_count: usize, ) -> DataPoint where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static + SampleUniform + PartialOrd, V: Clone + Debug + Sync + Send + 'static, { assert!(thread_count > 1); let mut csize = ((backing_set.len() / 100) * (cache_size_pct as usize)) / thread_count; if csize == 0 { csize = 1; } let mut tlocals = ThreadLocal::new(4, csize); let mut cache = tlocals.pop().expect("Can't get local cache."); let backing_set = Arc::new(backing_set); // Setup our sync RUNNING.store(true, Ordering::Relaxed); // Warm up let mut wr_txn = cache.write(); for _i in 0..warm_iters { let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); // hit/miss process. if !wr_txn.contains_key(&k) { wr_txn.insert(k, v); } } wr_txn.commit(); // Start some bg threads. let handles: Vec<_> = tlocals .into_iter() .map(|cache| { // Build the threads. 
let back_set = backing_set.clone(); let back_set_delay = backing_set_delay.clone(); let pat = access_pattern.clone(); thread::spawn(move || tlocal_multi_thread_worker(cache, back_set, back_set_delay, pat)) }) .collect(); // We do our measurement in this thread. let mut elapsed = Duration::from_secs(0); let mut hit_count = 0; let mut attempt = 0; for _i in 0..iters { attempt += 1; let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); let start = Instant::now(); let mut wr_txn = cache.write(); // hit/miss process. if wr_txn.contains_key(&k) { hit_count += 1; } else { if let Some(delay) = backing_set_delay { thread::sleep(delay); } wr_txn.insert(k, v); } wr_txn.commit(); elapsed = elapsed.checked_add(start.elapsed()).unwrap(); } // Stop our bg threads (how to signal?) RUNNING.store(false, Ordering::Relaxed); // Join them. handles .into_iter() .for_each(|th| th.join().expect("Can't join thread")); // Return our data. let hit_pct = (f64::from(hit_count as u32) / f64::from(iters as u32)) * 100.0; DataPoint { elapsed, csize, hit_count, attempt, hit_pct, } } fn multi_thread_worker<K, V>( arc: Arc<ARCache<K, V>>, backing_set: Arc<HashMap<K, V>>, backing_set_delay: Option<Duration>, access_pattern: AccessPattern<K>, ) where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static + SampleUniform + PartialOrd, V: Clone + Debug + Sync + Send + 'static, { while RUNNING.load(Ordering::Relaxed) { let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); let mut rd_txn = arc.read(); // hit/miss process. if !rd_txn.contains_key(&k) { if let Some(delay) = backing_set_delay { thread::sleep(delay); } rd_txn.insert(k, v); } } } fn run_multi_thread_test<K, V>( // Number of iterations iters: u64, // Number of iters to warm the cache. warm_iters: u64, // pct of backing set size to configure into the cache. cache_size_pct: u64, // backing set. backing_set: HashMap<K, V>, // backing set access delay on miss backing_set_delay: Option<Duration>, // How to lookup keys during each iter. 
access_pattern: AccessPattern<K>, // How many threads? thread_count: usize, ) -> DataPoint where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static + SampleUniform + PartialOrd, V: Clone + Debug + Sync + Send + 'static, { assert!(thread_count > 1); let mut csize = (backing_set.len() / 100) * (cache_size_pct as usize); if csize == 0 { csize = 1; } let arc: Arc<ARCache<K, V>> = Arc::new( ARCacheBuilder::new() .set_size(csize, 0) .set_watermark(0) .set_reader_quiesce(false) .build() .unwrap(), ); let backing_set = Arc::new(backing_set); // Setup our sync RUNNING.store(true, Ordering::Relaxed); // Warm up let mut wr_txn = arc.write(); for _i in 0..warm_iters { let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); // hit/miss process. let cont = wr_txn.contains_key(&k); if !cont { wr_txn.insert(k, v); } } wr_txn.commit(); // Start some bg threads. let handles: Vec<_> = (0..(thread_count - 1)) .into_iter() .map(|_| { // Build the threads. let cache = arc.clone(); let back_set = backing_set.clone(); let back_set_delay = backing_set_delay.clone(); let pat = access_pattern.clone(); thread::spawn(move || multi_thread_worker(cache, back_set, back_set_delay, pat)) }) .collect(); // We do our measurement in this thread. let mut elapsed = Duration::from_secs(0); let mut hit_count = 0; let mut attempt = 0; for _i in 0..iters { attempt += 1; let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); let start = Instant::now(); let mut wr_txn = arc.write(); // eprintln!("lock took - {:?}", start.elapsed()); // hit/miss process. if wr_txn.contains_key(&k) { hit_count += 1; } else { if let Some(delay) = backing_set_delay { thread::sleep(delay); } wr_txn.insert(k, v); } wr_txn.commit(); elapsed = elapsed.checked_add(start.elapsed()).unwrap(); } // Stop our bg threads (how to signal?) RUNNING.store(false, Ordering::Relaxed); // Join them. handles .into_iter() .for_each(|th| th.join().expect("Can't join thread")); // Return our data. 
let hit_pct = (f64::from(hit_count as u32) / f64::from(iters as u32)) * 100.0; DataPoint { elapsed, csize, hit_count, attempt, hit_pct, } } fn run_single_thread_test<K, V>( // Number of iterations iters: u64, // Number of iters to warm the cache. warm_iters: u64, // pct of backing set size to configure into the cache. cache_size_pct: u64, // backing set. backing_set: HashMap<K, V>, // backing set access delay on miss backing_set_delay: Option<Duration>, // How to lookup keys during each iter. access_pattern: AccessPattern<K>, ) -> DataPoint where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static + SampleUniform + PartialOrd, V: Clone + Debug + Sync + Send + 'static, { let mut csize = (backing_set.len() / 100) * (cache_size_pct as usize); if csize == 0 { csize = 1; } let arc: ARCache<K, V> = ARCacheBuilder::new() .set_size(csize, 0) .set_watermark(0) .set_reader_quiesce(false) .build() .unwrap(); let mut elapsed = Duration::from_secs(0); let mut hit_count = 0; let mut attempt = 0; let mut wr_txn = arc.write(); for _i in 0..warm_iters { let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); // hit/miss process. if !wr_txn.contains_key(&k) { wr_txn.insert(k, v); } } wr_txn.commit(); for _i in 0..iters { attempt += 1; let k = access_pattern.next(); let v = backing_set.get(&k).cloned().unwrap(); let start = Instant::now(); let mut wr_txn = arc.write(); // hit/miss process. let cont = wr_txn.contains_key(&k); if cont { hit_count += 1; } else { if let Some(delay) = backing_set_delay { thread::sleep(delay); } wr_txn.insert(k, v); } wr_txn.commit(); elapsed = elapsed.checked_add(start.elapsed()).unwrap(); } let hit_pct = (f64::from(hit_count as u32) / f64::from(iters as u32)) * 100.0; DataPoint { elapsed, csize, hit_count, attempt, hit_pct, } } macro_rules! 
tlocal_multi_thread_x_small_latency { ($c:expr, $max:expr, $measure:expr) => { let mut group = $c.benchmark_group(function_name!()); group.warm_up_time(Duration::from_secs(10)); group.measurement_time(Duration::from_secs(60)); for pct in &[10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110] { group.bench_with_input(BenchmarkId::from_parameter(pct), &$max, |b, &max| { b.iter_custom(|iters| { let mut backing_set: HashMap<usize, usize> = HashMap::with_capacity(max); (0..$max).for_each(|i| { backing_set.insert(i, i); }); let data = run_tlocal_multi_thread_test( iters, iters / 5, *pct, backing_set, Some(Duration::from_nanos(5)), AccessPattern::Random(0, max), 4, ); println!("{:?}", data); data.elapsed }) }); } group.finish(); }; } macro_rules! basic_multi_thread_x_small_latency { ($c:expr, $max:expr, $measure:expr) => { let mut group = $c.benchmark_group(function_name!()); group.warm_up_time(Duration::from_secs(10)); group.measurement_time(Duration::from_secs(60)); for pct in &[10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110] { group.bench_with_input(BenchmarkId::from_parameter(pct), &$max, |b, &max| { b.iter_custom(|iters| { let mut backing_set: HashMap<usize, usize> = HashMap::with_capacity(max); (0..$max).for_each(|i| { backing_set.insert(i, i); }); let data = run_multi_thread_test( iters, iters / 5, *pct, backing_set, Some(Duration::from_nanos(5)), AccessPattern::Random(0, max), 4, ); println!("{:?}", data); data.elapsed }) }); } group.finish(); }; } macro_rules! 
basic_single_thread_x_small_latency { ($c:expr, $max:expr, $measure:expr) => { let mut group = $c.benchmark_group(function_name!()); group.warm_up_time(Duration::from_secs(10)); for pct in &[10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110] { group.bench_with_input(BenchmarkId::from_parameter(pct), &$max, |b, &max| { b.iter_custom(|iters| { let mut backing_set: HashMap<usize, usize> = HashMap::with_capacity(max); (0..$max).for_each(|i| { backing_set.insert(i, i); }); let data = run_single_thread_test( iters, iters / 5, *pct, backing_set, Some(Duration::from_nanos(5)), AccessPattern::Random(0, max), ); println!("{:?}", data); data.elapsed }) }); } group.finish(); }; } macro_rules! basic_single_thread_x_small_pct { ($c:expr, $max:expr, $measure:expr) => { let mut group = $c.benchmark_group(function_name!()); group.warm_up_time(Duration::from_secs(10)); for pct in &[10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110] { group.bench_with_input(BenchmarkId::from_parameter(pct), &$max, |b, &max| { b.iter_custom(|iters| { let mut backing_set: HashMap<usize, usize> = HashMap::with_capacity(max); (0..$max).for_each(|i| { backing_set.insert(i, i); }); let data = run_single_thread_test( iters, iters / 10, *pct, backing_set, Some(Duration::from_nanos(5)), AccessPattern::Random(0, max), ); println!("{:?}", data); (data.hit_count, data.attempt) }) }); } group.finish(); }; } #[named] pub fn tlocal_multi_thread_2048_small_latency(c: &mut Criterion) { tlocal_multi_thread_x_small_latency!(c, 2048, MeasureType::Latency); } #[named] pub fn basic_multi_thread_2048_small_latency(c: &mut Criterion) { basic_multi_thread_x_small_latency!(c, 2048, MeasureType::Latency); } #[named] pub fn basic_single_thread_2048_small_latency(c: &mut Criterion) { basic_single_thread_x_small_latency!(c, 2048, MeasureType::Latency); } #[named] pub fn basic_single_thread_2048_small_pct(c: &mut Criterion) { basic_single_thread_x_small_pct!(c, 2048, MeasureType::HitPct); } criterion_group!( name = latency; config = Criterion::default() // 
.measurement_time(Duration::from_secs(15)) .with_plots(); targets = basic_single_thread_2048_small_latency, basic_multi_thread_2048_small_latency, tlocal_multi_thread_2048_small_latency, ); criterion_group!( name = hit_percent; config = Criterion::default().with_measurement(HitPercentage); targets = basic_single_thread_2048_small_pct ); criterion_main!(latency, hit_percent); concread-0.4.6/benches/hashmap_benchmark.rs000064400000000000000000000233461046102023000170310ustar 00000000000000// The benchmarks aim to only measure times of the operations in their names. // That's why all use Bencher::iter_batched which enables non-benchmarked // preparation before running the measured function. // Insert (which doesn't completely avoid updates, but makes them unlikely), // remove and search have benchmarks with empty values and with custom structs // of 42 64-bit integers. // (as a sidenote, the performance really differs; in the case of remove, the // remove function itself returns original value - the benchmark doesn't use // this value, but performance is significantly worse - about twice on my // machine - than the empty value remove; it might be interesting to see if a // remove function returning void would perform better, ie. if the returns // are optimized - omitted in this case). // The counts of inserted/removed/searched elements are chosen at random from // constant ranges in an attempt to avoid a single count performing better // because of specific HW features of computers the code is benchmarked with. 
extern crate concread; extern crate criterion; extern crate rand; use concread::hashmap::*; use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion}; use rand::{thread_rng, Rng}; // ranges of counts for different benchmarks (MINs are inclusive, MAXes exclusive): const INSERT_COUNT_MIN: usize = 120; const INSERT_COUNT_MAX: usize = 140; const INSERT_COUNT_FOR_REMOVE_MIN: usize = 340; const INSERT_COUNT_FOR_REMOVE_MAX: usize = 360; const REMOVE_COUNT_MIN: usize = 120; const REMOVE_COUNT_MAX: usize = 140; const INSERT_COUNT_FOR_SEARCH_MIN: usize = 120; const INSERT_COUNT_FOR_SEARCH_MAX: usize = 140; const SEARCH_COUNT_MIN: usize = 120; const SEARCH_COUNT_MAX: usize = 140; // In the search benches, we randomly search for elements of a range of SEARCH_SIZE_NUMERATOR / SEARCH_SIZE_DENOMINATOR // times the number of elements contained. const SEARCH_SIZE_NUMERATOR: usize = 4; const SEARCH_SIZE_DENOMINATOR: usize = 3; pub fn insert_empty_value_rollback(c: &mut Criterion) { c.bench_function("insert_empty_value_rollback", |b| { b.iter_batched( || prepare_insert(()), |(mut map, list)| { insert_vec(&mut map, list); }, BatchSize::SmallInput, ) }); } pub fn insert_empty_value_commit(c: &mut Criterion) { c.bench_function("insert_empty_value_commit", |b| { b.iter_batched( || prepare_insert(()), |(mut map, list)| insert_vec(&mut map, list).commit(), BatchSize::SmallInput, ) }); } pub fn insert_struct_value_rollback(c: &mut Criterion) { c.bench_function("insert_struct_value_rollback", |b| { b.iter_batched( || prepare_insert(Struct::default()), |(mut map, list)| { insert_vec(&mut map, list); }, BatchSize::SmallInput, ) }); } pub fn insert_struct_value_commit(c: &mut Criterion) { c.bench_function("insert_struct_value_commit", |b| { b.iter_batched( || prepare_insert(Struct::default()), |(mut map, list)| insert_vec(&mut map, list).commit(), BatchSize::SmallInput, ) }); } pub fn remove_empty_value_rollback(c: &mut Criterion) { 
c.bench_function("remove_empty_value_rollback", |b| { b.iter_batched( || prepare_remove(()), |(ref mut map, ref list)| { remove_vec(map, list); }, BatchSize::SmallInput, ) }); } pub fn remove_empty_value_commit(c: &mut Criterion) { c.bench_function("remove_empty_value_commit", |b| { b.iter_batched( || prepare_remove(()), |(ref mut map, ref list)| remove_vec(map, list).commit(), BatchSize::SmallInput, ) }); } pub fn remove_struct_value_no_read_rollback(c: &mut Criterion) { c.bench_function("remove_struct_value_no_read_rollback", |b| { b.iter_batched( || prepare_remove(Struct::default()), |(ref mut map, ref list)| { remove_vec(map, list); }, BatchSize::SmallInput, ) }); } pub fn remove_struct_value_no_read_commit(c: &mut Criterion) { c.bench_function("remove_struct_value_no_read_commit", |b| { b.iter_batched( || prepare_remove(Struct::default()), |(ref mut map, ref list)| remove_vec(map, list).commit(), BatchSize::SmallInput, ) }); } pub fn search_empty_value(c: &mut Criterion) { c.bench_function("search_empty_value", |b| { b.iter_batched( || prepare_search(()), |(ref map, ref list)| search_vec(map, list), BatchSize::SmallInput, ) }); } pub fn search_struct_value(c: &mut Criterion) { c.bench_function("search_struct_value", |b| { b.iter_batched( || prepare_search(Struct::default()), |(ref map, ref list)| search_vec(map, list), BatchSize::SmallInput, ) }); } criterion_group!( insert, insert_empty_value_rollback, insert_empty_value_commit, insert_struct_value_rollback, insert_struct_value_commit ); criterion_group!( remove, remove_empty_value_rollback, remove_empty_value_commit, remove_struct_value_no_read_rollback, remove_struct_value_no_read_commit ); criterion_group!(search, search_empty_value, search_struct_value); criterion_main!(insert, remove, search); // Utility functions: fn insert_vec( map: &mut HashMap, list: Vec<(u32, V)>, ) -> HashMapWriteTxn { let mut write_txn = map.write(); for (key, val) in list.into_iter() { write_txn.insert(key, val); } write_txn } fn 
remove_vec<'a, V: Clone + Sync + Send + 'static>(
    map: &'a mut HashMap<u32, V>,
    list: &Vec<u32>,
) -> HashMapWriteTxn<'a, u32, V> {
    let mut write_txn = map.write();
    for i in list.iter() {
        write_txn.remove(i);
    }
    write_txn
}

fn search_vec<V: Clone + Sync + Send + 'static>(map: &HashMap<u32, V>, list: &Vec<u32>) {
    let read_txn = map.read();
    for i in list.iter() {
        read_txn.get(black_box(i));
    }
}

#[derive(Default, Clone)]
#[allow(dead_code)]
struct Struct {
    var1: i64, var2: i64, var3: i64, var4: i64, var5: i64, var6: i64,
    var7: i64, var8: i64, var9: i64, var10: i64, var11: i64, var12: i64,
    var13: i64, var14: i64, var15: i64, var16: i64, var17: i64, var18: i64,
    var19: i64, var20: i64, var21: i64, var22: i64, var23: i64, var24: i64,
    var25: i64, var26: i64, var27: i64, var28: i64, var29: i64, var30: i64,
    var31: i64, var32: i64, var33: i64, var34: i64, var35: i64, var36: i64,
    var37: i64, var38: i64, var39: i64, var40: i64, var41: i64, var42: i64,
}

fn prepare_insert<V: Clone + Sync + Send + 'static>(value: V) -> (HashMap<u32, V>, Vec<(u32, V)>) {
    let mut rng = thread_rng();
    let count = rng.gen_range(INSERT_COUNT_MIN..INSERT_COUNT_MAX);
    let mut list = Vec::with_capacity(count);
    for _ in 0..count {
        list.push((
            rng.gen_range(0..INSERT_COUNT_MAX << 8) as u32,
            value.clone(),
        ));
    }
    (HashMap::new(), list)
}

/// Prepares a remove benchmark with values in the HashMap being clones of the 'value' parameter
fn prepare_remove<V: Clone + Sync + Send + 'static>(value: V) -> (HashMap<u32, V>, Vec<u32>) {
    let mut rng = thread_rng();
    let insert_count = rng.gen_range(INSERT_COUNT_FOR_REMOVE_MIN..INSERT_COUNT_FOR_REMOVE_MAX);
    let remove_count = rng.gen_range(REMOVE_COUNT_MIN..REMOVE_COUNT_MAX);
    let map = HashMap::new();
    let mut write_txn = map.write();
    for i in random_order(insert_count, insert_count).iter() {
        // We could count on the hash function alone to make the order random, but it seems
        // better to count on every possible implementation.
        write_txn.insert(*i, value.clone());
    }
    write_txn.commit();
    (map, random_order(insert_count, remove_count))
}

fn prepare_search<V: Clone + Sync + Send + 'static>(value: V) -> (HashMap<u32, V>, Vec<u32>) {
    let mut rng = thread_rng();
    let insert_count = rng.gen_range(INSERT_COUNT_FOR_SEARCH_MIN..INSERT_COUNT_FOR_SEARCH_MAX);
    let search_limit = insert_count * SEARCH_SIZE_NUMERATOR / SEARCH_SIZE_DENOMINATOR;
    let search_count = rng.gen_range(SEARCH_COUNT_MIN..SEARCH_COUNT_MAX);
    // Create a HashMap with elements 0 through insert_count(-1)
    let map = HashMap::new();
    let mut write_txn = map.write();
    for k in 0..insert_count {
        write_txn.insert(k as u32, value.clone());
    }
    write_txn.commit();
    // Choose 'search_count' numbers from [0,search_limit) randomly to be searched in the created map.
    let mut list = Vec::with_capacity(search_count);
    for _ in 0..search_count {
        list.push(rng.gen_range(0..search_limit as u32));
    }
    (map, list)
}

/// Returns a Vec of n elements from the range [0,up_to) in random order without repetition
fn random_order(up_to: usize, n: usize) -> Vec<u32> {
    let mut rng = thread_rng();
    let mut order = Vec::with_capacity(n);
    let mut generated = vec![false; up_to];
    let mut remaining = n;
    let mut remaining_elems = up_to;
    while remaining > 0 {
        let mut r = rng.gen_range(0..remaining_elems);
        // find the r-th yet nongenerated number:
        for i in 0..up_to {
            if generated[i] {
                continue;
            }
            if r == 0 {
                order.push(i as u32);
                generated[i] = true;
                break;
            }
            r -= 1;
        }
        remaining -= 1;
        remaining_elems -= 1;
    }
    order
}

concread-0.4.6/src/arcache/ll.rs

use std::fmt::Debug;
use std::marker::PhantomData;
use std::mem::MaybeUninit;
use std::ptr;

pub trait LLWeight {
    fn ll_weight(&self) -> usize;
}

/*
impl<T> LLWeight for T {
    #[inline]
    default fn ll_weight(&self) -> usize {
        1
    }
}
*/

#[derive(Clone, Debug)]
pub(crate) struct LL<K>
where
    K: LLWeight + Clone + Debug,
{
    head: *mut LLNode<K>,
    tail: *mut LLNode<K>,
    size: usize,
    // tag: usize,
}

#[derive(Debug)]
pub(crate) struct LLNode<K>
where
    K: LLWeight + Clone + Debug,
{
    pub(crate) k: MaybeUninit<K>,
    next: *mut LLNode<K>,
    prev: *mut LLNode<K>,
    // tag: usize,
}

#[derive(Clone, Debug)]
pub(crate) struct LLIterMut<'a, K>
where
    K: LLWeight + Clone + Debug,
{
    next: *mut LLNode<K>,
    end: *mut LLNode<K>,
    phantom: PhantomData<&'a K>,
}

impl<K> LL<K>
where
    K: LLWeight + Clone + Debug,
{
    pub(crate) fn new(// tag: usize
    ) -> Self {
        // assert!(tag > 0);
        let (head, tail) = LLNode::create_markers();
        LL {
            head,
            tail,
            size: 0,
            // tag,
        }
    }

    #[allow(dead_code)]
    pub(crate) fn iter_mut(&self) -> LLIterMut<K> {
        LLIterMut {
            next: unsafe { (*self.head).next },
            end: self.tail,
            phantom: PhantomData,
        }
    }

    // Append a k to the set, and return its pointer.
    pub(crate) fn append_k(&mut self, k: K) -> *mut LLNode<K> {
        let n = LLNode::new(k);
        self.append_n(n);
        n
    }

    // Append an arbitrary node into this set.
    pub(crate) fn append_n(&mut self, n: *mut LLNode<K>) {
        // Who is to the left of tail?
        unsafe {
            self.size += (*(*n).k.as_ptr()).ll_weight();
            // must be untagged
            // assert!((*n).tag == 0);
            debug_assert!((*self.tail).next.is_null());
            debug_assert!(!(*self.tail).prev.is_null());
            let pred = (*self.tail).prev;
            debug_assert!(!pred.is_null());
            debug_assert!((*pred).next == self.tail);
            (*n).prev = pred;
            (*n).next = self.tail;
            // (*n).tag = self.tag;
            (*pred).next = n;
            (*self.tail).prev = n;
            // We should have a prev and next
            debug_assert!(!(*n).prev.is_null());
            debug_assert!(!(*n).next.is_null());
            // And that prev's next is us, and next's prev is us.
            debug_assert!(!(*(*n).prev).next.is_null());
            debug_assert!(!(*(*n).next).prev.is_null());
            debug_assert!((*(*n).prev).next == n);
            debug_assert!((*(*n).next).prev == n);
        }
    }

    // Given a node ptr, extract and put it at the tail. IE hit.
    pub(crate) fn touch(&mut self, n: *mut LLNode<K>) {
        debug_assert!(self.size > 0);
        if n == unsafe { (*self.tail).prev } {
            // Done, no-op
        } else {
            self.extract(n);
            self.append_n(n);
        }
    }

    // remove this node from the ll, and return its ptr.
pub(crate) fn pop(&mut self) -> *mut LLNode { let n = unsafe { (*self.head).next }; self.extract(n); debug_assert!(!n.is_null()); debug_assert!(n != self.head); debug_assert!(n != self.tail); n } // Cut a node out from this list from any location. pub(crate) fn extract(&mut self, n: *mut LLNode) { assert!(self.size > 0); assert!(!n.is_null()); unsafe { // We should have a prev and next debug_assert!(!(*n).prev.is_null()); debug_assert!(!(*n).next.is_null()); // And that prev's next is us, and next's prev is us. debug_assert!(!(*(*n).prev).next.is_null()); debug_assert!(!(*(*n).next).prev.is_null()); debug_assert!((*(*n).prev).next == n); debug_assert!((*(*n).next).prev == n); // And we belong to this set // assert!((*n).tag == self.tag); self.size -= (*(*n).k.as_ptr()).ll_weight(); } unsafe { let prev = (*n).prev; let next = (*n).next; // prev <-> n <-> next (*next).prev = prev; (*prev).next = next; // Null things for paranoia. if cfg!(test) || cfg!(debug_assertions) { (*n).prev = ptr::null_mut(); (*n).next = ptr::null_mut(); } // (*n).tag = 0; } } pub(crate) fn len(&self) -> usize { self.size } #[cfg(test)] pub(crate) fn peek_head(&self) -> Option<&K> { debug_assert!(!self.head.is_null()); let next = unsafe { (*self.head).next }; if next == self.tail { None } else { let l = unsafe { let ptr = (*next).k.as_ptr(); &(*ptr) as &K }; Some(l) } } #[cfg(test)] pub(crate) fn peek_tail(&self) -> Option<&K> { debug_assert!(!self.tail.is_null()); let prev = unsafe { (*self.tail).prev }; if prev == self.head { None } else { let l = unsafe { let ptr = (*prev).k.as_ptr(); &(*ptr) as &K }; Some(l) } } } impl Drop for LL where K: LLWeight + Clone + Debug, { fn drop(&mut self) { let head = self.head; let tail = self.tail; let mut n = unsafe { (*head).next }; while n != tail { let next = unsafe { (*n).next }; unsafe { ptr::drop_in_place((*n).k.as_mut_ptr()) }; LLNode::free(n); n = next; } LLNode::free(head); LLNode::free(tail); } } impl LLNode where K: LLWeight + Clone + Debug, { 
#[inline] pub(crate) fn create_markers() -> (*mut Self, *mut Self) { let head = Box::into_raw(Box::new(LLNode { k: MaybeUninit::uninit(), next: ptr::null_mut(), prev: ptr::null_mut(), // tag: 0, })); let tail = Box::into_raw(Box::new(LLNode { k: MaybeUninit::uninit(), next: ptr::null_mut(), prev: head, // tag: 0, })); unsafe { (*head).next = tail; } (head, tail) } #[inline] pub(crate) fn new( k: K, // tag: usize ) -> *mut Self { let b = Box::new(LLNode { k: MaybeUninit::new(k), next: ptr::null_mut(), prev: ptr::null_mut(), // tag, }); Box::into_raw(b) } #[inline] fn free(v: *mut Self) { debug_assert!(!v.is_null()); let _ = unsafe { Box::from_raw(v) }; } } impl AsRef for LLNode where K: LLWeight + Clone + Debug, { fn as_ref(&self) -> &K { unsafe { let ptr = self.k.as_ptr(); &(*ptr) as &K } } } impl AsMut for LLNode where K: LLWeight + Clone + Debug, { fn as_mut(&mut self) -> &mut K { unsafe { let ptr = self.k.as_mut_ptr(); &mut (*ptr) as &mut K } } } impl<'a, K> Iterator for LLIterMut<'a, K> where K: LLWeight + Clone + Debug, { type Item = &'a mut K; fn next(&mut self) -> Option { debug_assert!(!self.next.is_null()); if self.next == self.end { None } else { let r = Some(unsafe { (*self.next).as_mut() }); self.next = unsafe { (*self.next).next }; r } } } #[cfg(test)] mod tests { use super::{LLWeight, LL}; impl LLWeight for Box { #[inline] fn ll_weight(&self) -> usize { 1 } } #[test] fn test_cache_arc_ll_basic() { // We test with box so that we leak on error let mut ll: LL> = LL::new(); assert!(ll.len() == 0); // Allocate new nodes let n1 = ll.append_k(Box::new(1)); let n2 = ll.append_k(Box::new(2)); let n3 = ll.append_k(Box::new(3)); let n4 = ll.append_k(Box::new(4)); // Check that n1 is the head, n3 is tail. assert!(ll.len() == 4); assert!(ll.peek_head().unwrap().as_ref() == &1); assert!(ll.peek_tail().unwrap().as_ref() == &4); // Touch 2, it's now tail. 
ll.touch(n2); assert!(ll.len() == 4); assert!(ll.peek_head().unwrap().as_ref() == &1); assert!(ll.peek_tail().unwrap().as_ref() == &2); // Touch 1 (head), it's the tail now. ll.touch(n1); assert!(ll.len() == 4); assert!(ll.peek_head().unwrap().as_ref() == &3); assert!(ll.peek_tail().unwrap().as_ref() == &1); // Touch 1 (tail), it stays as tail. ll.touch(n1); assert!(ll.len() == 4); assert!(ll.peek_head().unwrap().as_ref() == &3); assert!(ll.peek_tail().unwrap().as_ref() == &1); // pop from head let _n3 = ll.pop(); assert!(ll.len() == 3); assert!(ll.peek_head().unwrap().as_ref() == &4); assert!(ll.peek_tail().unwrap().as_ref() == &1); // cut a node out from any (head, mid, tail) ll.extract(n2); assert!(ll.len() == 2); assert!(ll.peek_head().unwrap().as_ref() == &4); assert!(ll.peek_tail().unwrap().as_ref() == &1); ll.extract(n1); assert!(ll.len() == 1); assert!(ll.peek_head().unwrap().as_ref() == &4); assert!(ll.peek_tail().unwrap().as_ref() == &4); // test touch on ll of size 1 ll.touch(n4); assert!(ll.len() == 1); assert!(ll.peek_head().unwrap().as_ref() == &4); assert!(ll.peek_tail().unwrap().as_ref() == &4); // Remove last let _n4 = ll.pop(); assert!(ll.len() == 0); assert!(ll.peek_head().is_none()); assert!(ll.peek_tail().is_none()); // Add them all back so they are dropped. 
        ll.append_n(n1);
        ll.append_n(n2);
        ll.append_n(n3);
        ll.append_n(n4);
    }

    #[derive(Clone, Debug)]
    struct Weighted {
        _i: u64,
    }

    impl LLWeight for Weighted {
        fn ll_weight(&self) -> usize {
            8
        }
    }

    #[test]
    fn test_cache_arc_ll_weighted() {
        let mut ll: LL<Weighted> = LL::new();
        assert!(ll.len() == 0);
        let _n1 = ll.append_k(Weighted { _i: 1 });
        assert!(ll.len() == 8);
        let _n2 = ll.append_k(Weighted { _i: 2 });
        assert!(ll.len() == 16);
        let n1 = ll.pop();
        assert!(ll.len() == 8);
        let n2 = ll.pop();
        assert!(ll.len() == 0);
        // Add back so they drop
        ll.append_n(n1);
        ll.append_n(n2);
    }
}

concread-0.4.6/src/arcache/mod.rs

//! ARCache - A concurrently readable adaptive replacement cache.
//!
//! An ARCache is used in place of a `RwLock` or `Mutex`.
//! This structure is transactional, meaning that readers have guaranteed
//! point-in-time views of the cache and their items, while allowing writers
//! to proceed with inclusions and cache state management in parallel.
//!
//! This means that unlike a `RwLock`, which can have many readers OR one writer,
//! this cache is capable of many readers, over multiple data generations, AND
//! writers that are serialised. This formally means that this is an ACID
//! compliant Cache.
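The transactional model described in the module docs above — many point-in-time readers proceeding in parallel with one serialised writer — can be illustrated in miniature with a generation-swap pattern. This is a deliberately simplified model, not concread's implementation (concread does not hold a lock for the duration of the copy, and adds eviction and channel machinery on top); the names `GenCell`, `read`, and `write_commit` are invented for the sketch:

```rust
use std::sync::{Arc, Mutex};

// Illustrative generation cell: readers clone an Arc to the current generation
// and keep a stable point-in-time view; the single writer (serialised by the
// Mutex) builds a new generation and publishes it atomically on commit.
struct GenCell<T> {
    current: Mutex<Arc<T>>,
}

impl<T: Clone> GenCell<T> {
    fn new(v: T) -> Self {
        GenCell {
            current: Mutex::new(Arc::new(v)),
        }
    }

    // Begin a "read transaction": a snapshot that stays valid however many
    // write generations are committed after it.
    fn read(&self) -> Arc<T> {
        self.current.lock().unwrap().clone()
    }

    // A serialised "write transaction": copy, mutate, then commit the new
    // generation. (A real concurrently readable structure avoids holding
    // the lock while copying.)
    fn write_commit(&self, f: impl FnOnce(&mut T)) {
        let mut guard = self.current.lock().unwrap();
        let mut next = (**guard).clone();
        f(&mut next);
        *guard = Arc::new(next);
    }
}

fn main() {
    let cell = GenCell::new(vec![1, 2, 3]);
    let reader = cell.read(); // point-in-time view
    cell.write_commit(|v| v.push(4)); // writer commits a new generation
    assert_eq!(*reader, vec![1, 2, 3]); // the old reader is unaffected
    assert_eq!(*cell.read(), vec![1, 2, 3, 4]); // new readers see the commit
    println!("ok");
}
```

The ARCache below applies the same principle to a hash trie of cache items, layering the ARC frequency/recency lists and eviction channels on top of the generational store.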
mod ll;
/// Stats collection for [ARCache]
pub mod stats;

use self::ll::{LLNode, LLWeight, LL};
use self::stats::{ARCacheReadStat, ARCacheWriteStat};
// use self::traits::ArcWeight;
// use crate::cowcell::{CowCell, CowCellReadTxn};
use crate::hashtrie::*;
use crossbeam_queue::ArrayQueue;
use std::collections::HashMap as Map;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::sync::{Mutex, RwLock};

use std::borrow::Borrow;
use std::cell::UnsafeCell;
use std::fmt::Debug;
use std::hash::Hash;
use std::mem;
use std::num::NonZeroUsize;
use std::ops::Deref;
use std::ops::DerefMut;
use std::time::Instant;

// const READ_THREAD_MIN: usize = 8;
const READ_THREAD_RATIO: usize = 16;

enum ThreadCacheItem<V> {
    Present(V, bool, usize),
    Removed(bool),
}

struct CacheHitEvent {
    t: Instant,
    k_hash: u64,
}

struct CacheIncludeEvent<K, V> {
    t: Instant,
    k: K,
    v: V,
    txid: u64,
    size: usize,
}

#[derive(Hash, Ord, PartialOrd, Eq, PartialEq, Clone, Debug)]
struct CacheItemInner<K>
where
    K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
{
    k: K,
    txid: u64,
    size: usize,
}

impl<K> LLWeight for CacheItemInner<K>
where
    K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
{
    #[inline]
    fn ll_weight(&self) -> usize {
        self.size
    }
}

#[derive(Clone, Debug)]
enum CacheItem<K, V>
where
    K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
{
    Freq(*mut LLNode<CacheItemInner<K>>, V),
    Rec(*mut LLNode<CacheItemInner<K>>, V),
    GhostFreq(*mut LLNode<CacheItemInner<K>>),
    GhostRec(*mut LLNode<CacheItemInner<K>>),
    Haunted(*mut LLNode<CacheItemInner<K>>),
}

unsafe impl<
        K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
        V: Clone + Debug + Sync + Send + 'static,
    > Send for CacheItem<K, V>
{
}
unsafe impl<
        K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
        V: Clone + Debug + Sync + Send + 'static,
    > Sync for CacheItem<K, V>
{
}

#[cfg(test)]
#[derive(Clone, Debug, PartialEq)]
pub(crate) enum CacheState {
    Freq,
    Rec,
    GhostFreq,
    GhostRec,
    Haunted,
    None,
}

#[cfg(test)]
#[derive(Debug, PartialEq)]
pub(crate) struct CStat {
    max: usize,
    cache: usize,
    tlocal: usize,
    freq: usize,
    rec:
usize, ghost_freq: usize, ghost_rec: usize, haunted: usize, p: usize, } struct ArcInner where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { /// Weight of items between the two caches. p: usize, freq: LL>, rec: LL>, ghost_freq: LL>, ghost_rec: LL>, haunted: LL>, // // stat_rx: Receiver, // rx: Receiver>, hit_queue: Arc>, inc_queue: Arc>>, min_txid: u64, } struct ArcShared where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { // Max number of elements to cache. max: usize, // Max number of elements for a reader per thread. read_max: usize, // channels for readers. // stat_tx: Sender, hit_queue: Arc>, inc_queue: Arc>>, /// The number of items that are present in the cache before we start to process /// the arc sets/lists. watermark: usize, /// If readers should attempt to quiesce the cache. Default true reader_quiesce: bool, } /// A configurable builder to create new concurrent Adaptive Replacement Caches. pub struct ARCacheBuilder { // stats: Option, max: Option, read_max: Option, watermark: Option, reader_quiesce: bool, } /// A concurrently readable adaptive replacement cache. Operations are performed on the /// cache via read and write operations. pub struct ARCache where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { // Use a unified tree, allows simpler movement of items between the // cache types. cache: HashTrie>, // This is normally only ever taken in "read" mode, so it's effectively // an uncontended barrier. 
shared: RwLock>, // These are only taken during a quiesce inner: Mutex>, // stats: CowCell, above_watermark: AtomicBool, } unsafe impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, > Send for ARCache { } unsafe impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, > Sync for ARCache { } #[derive(Debug, Clone)] struct ReadCacheItem where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { k: K, v: V, size: usize, } impl LLWeight for ReadCacheItem where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { #[inline] fn ll_weight(&self) -> usize { self.size } } struct ReadCache where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { // cache of our missed items to send forward. // On drop we drain this to the channel set: Map>>, read_size: usize, tlru: LL>, } /// An active read transaction over the cache. The data is this cache is guaranteed to be /// valid at the point in time the read is created. You may include items during a cache /// miss via the "insert" function. pub struct ARCacheReadTxn<'a, K, V, S> where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, S: ARCacheReadStat + Clone, { caller: &'a ARCache, // ro_txn to cache cache: HashTrieReadTxn<'a, K, CacheItem>, tlocal: Option>, // channel to send stat information // stat_tx: Sender, // tx channel to send forward events. 
// tx: Sender>, hit_queue: Arc>, inc_queue: Arc>>, above_watermark: bool, reader_quiesce: bool, stats: S, /* hits: u32, ops: u32, tlocal_hits: u32, tlocal_includes: u32, reader_includes: u32, reader_failed_includes: u32, */ } unsafe impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, S: ARCacheReadStat + Clone + Sync + Send + 'static, > Send for ARCacheReadTxn<'_, K, V, S> { } unsafe impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, S: ARCacheReadStat + Clone + Sync + Send + 'static, > Sync for ARCacheReadTxn<'_, K, V, S> { } /// An active write transaction over the cache. The data in this cache is isolated /// from readers, and may be rolled-back if an error occurs. Changes only become /// globally visible once you call "commit". Items may be added to the cache on /// a miss via "insert", and you can explicitly remove items by calling "remove". pub struct ARCacheWriteTxn<'a, K, V, S> where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, S: ARCacheWriteStat, { caller: &'a ARCache, // wr_txn to cache cache: HashTrieWriteTxn<'a, K, CacheItem>, // Cache of missed items (w_ dirty/clean) // On COMMIT we drain this to the main cache tlocal: Map>, hit: UnsafeCell>, clear: UnsafeCell, above_watermark: bool, // read_ops: UnsafeCell, stats: S, } impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, > CacheItem { fn to_vref(&self) -> Option<&V> { match &self { CacheItem::Freq(_, v) | CacheItem::Rec(_, v) => Some(v), _ => None, } } fn to_kvsref(&self) -> Option<(&K, &V, usize)> { match &self { CacheItem::Freq(lln, v) | CacheItem::Rec(lln, v) => { let cii = unsafe { &*((**lln).k.as_ptr()) }; Some((&cii.k, v, cii.size)) } _ => None, } } #[cfg(test)] fn to_state(&self) -> CacheState { match &self { CacheItem::Freq(_, _v) => CacheState::Freq, CacheItem::Rec(_, 
_v) => CacheState::Rec, CacheItem::GhostFreq(_) => CacheState::GhostFreq, CacheItem::GhostRec(_) => CacheState::GhostRec, CacheItem::Haunted(_) => CacheState::Haunted, } } } macro_rules! drain_ll_to_ghost { ( $cache:expr, $ll:expr, $gf:expr, $gr:expr, $txid:expr, $stats:expr ) => {{ while $ll.len() > 0 { let n = $ll.pop(); debug_assert!(!n.is_null()); unsafe { // Set the item's evict txid. (*n).as_mut().txid = $txid; } let mut r = $cache.get_mut(unsafe { &(*n).as_mut().k }); match r { Some(ref mut ci) => { let mut next_state = match &ci { CacheItem::Freq(n, _) => { $gf.append_n(*n); $stats.evict_from_frequent(unsafe { &(**n).as_mut().k }); CacheItem::GhostFreq(*n) } CacheItem::Rec(n, _) => { $gr.append_n(*n); $stats.evict_from_recent(unsafe { &(**n).as_mut().k }); CacheItem::GhostRec(*n) } _ => { // Impossible state! unreachable!(); } }; // Now change the state. mem::swap(*ci, &mut next_state); } None => { // Impossible state! unreachable!(); } } } // end while }}; } macro_rules! evict_to_len { ( $cache:expr, $ll:expr, $to_ll:expr, $size:expr, $txid:expr, $stats:expr ) => {{ debug_assert!($ll.len() >= $size); while $ll.len() > $size { let n = $ll.pop(); debug_assert!(!n.is_null()); let mut r = $cache.get_mut(unsafe { &(*n).as_mut().k }); unsafe { // Set the item's evict txid. (*n).as_mut().txid = $txid; } match r { Some(ref mut ci) => { let mut next_state = match &ci { CacheItem::Freq(llp, _v) => { debug_assert!(*llp == n); // No need to extract, already popped! // $ll.extract(*llp); $to_ll.append_n(*llp); $stats.evict_from_frequent(unsafe { &(**llp).as_mut().k }); CacheItem::GhostFreq(*llp) } CacheItem::Rec(llp, _v) => { debug_assert!(*llp == n); // No need to extract, already popped! // $ll.extract(*llp); $to_ll.append_n(*llp); $stats.evict_from_recent(unsafe { &(**llp).as_mut().k }); CacheItem::GhostRec(*llp) } _ => { // Impossible state! unreachable!(); } }; // Now change the state. mem::swap(*ci, &mut next_state); } None => { // Impossible state! 
unreachable!(); } } } }}; } macro_rules! evict_to_haunted_len { ( $cache:expr, $ll:expr, $to_ll:expr, $size:expr, $txid:expr ) => {{ debug_assert!($ll.len() >= $size); while $ll.len() > $size { let n = $ll.pop(); debug_assert!(!n.is_null()); $to_ll.append_n(n); let mut r = $cache.get_mut(unsafe { &(*n).as_mut().k }); unsafe { // Set the item's evict txid. (*n).as_mut().txid = $txid; } match r { Some(ref mut ci) => { // Now change the state. let mut next_state = CacheItem::Haunted(n); mem::swap(*ci, &mut next_state); } None => { // Impossible state! unreachable!(); } }; } }}; } impl Default for ARCacheBuilder { fn default() -> Self { ARCacheBuilder { // stats: None, max: None, read_max: None, watermark: None, reader_quiesce: true, } } } impl ARCacheBuilder { /// Create a new ARCache builder that you can configure before creation. pub fn new() -> Self { Self::default() } /// Configure a new ARCache, that derives its size based on your expected workload. /// /// The values are total number of items you want to have in memory, the number /// of read threads you expect concurrently, the expected average number of cache /// misses per read operation, and the expected average number of writes or write /// cache misses per operation. The following formula is assumed: /// /// `max + (threads * (max/16))` /// ` + (threads * avg number of misses per op)` /// ` + avg number of writes per transaction` /// /// The cache may still exceed your provided total, and inaccurate tuning numbers /// will yield a situation where you may use too-little ram, or too much. This could /// be to your read misses exceeding your expected amount causing the queues to have /// more items in them at a time, or your writes are larger than expected. /// /// If you set ex_ro_miss to zero, no read thread local cache will be configured, but /// space will still be reserved for channel communication. 
#[must_use] pub fn set_expected_workload( self, total: usize, threads: usize, ex_ro_miss: usize, ex_rw_miss: usize, read_cache: bool, ) -> Self { let total = isize::try_from(total).unwrap(); let threads = isize::try_from(threads).unwrap(); let ro_miss = isize::try_from(ex_ro_miss).unwrap(); let wr_miss = isize::try_from(ex_rw_miss).unwrap(); let ratio = isize::try_from(READ_THREAD_RATIO).unwrap(); // I'd like to thank wolfram alpha ... for this magic. let max = -((ratio * ((ro_miss * threads) + wr_miss - total)) / (ratio + threads)); let read_max = if read_cache { max / ratio } else { 0 }; let max = usize::try_from(max).unwrap(); let read_max = usize::try_from(read_max).unwrap(); ARCacheBuilder { // stats: self.stats, max: Some(max), read_max: Some(read_max), watermark: self.watermark, reader_quiesce: self.reader_quiesce, } } /// Configure a new ARCache, with a capacity of `max` main cache items and `read_max` /// Note that due to the way the cache operates, the number of items can and /// will exceed `max` on a regular basis, so you should consider using `set_expected_workload` /// and specifying your expected workload parameters to have a better derived /// cache size. #[must_use] pub fn set_size(self, max: usize, read_max: usize) -> Self { ARCacheBuilder { // stats: self.stats, max: Some(max), read_max: Some(read_max), watermark: self.watermark, reader_quiesce: self.reader_quiesce, } } // TODO: new_size is deprecated and has no information to refer to? /// See [ARCache::new_size] for more information. This allows manual configuration of the data /// tracking watermark. To disable this, set to 0. If watermark is greater than /// max, it will be clamped to max. #[must_use] pub fn set_watermark(self, watermark: usize) -> Self { ARCacheBuilder { // stats: self.stats, max: self.max, read_max: self.read_max, watermark: Some(watermark), reader_quiesce: self.reader_quiesce, } } /* /// Import read/write hit stats from a previous execution of this cache. 
#[must_use] pub fn set_stats(self, stats: CacheStats) -> Self { ARCacheBuilder { stats: Some(stats), max: self.max, read_max: self.read_max, watermark: self.watermark, reader_quiesce: self.reader_quiesce, } } */ /// Enable or Disable reader cache quiescing. In some cases this can improve /// reader performance, at the expense that cache includes or hits may be delayed /// before acknowledgement. You must MANUALLY run periodic quiesces if you mark /// this as "false" to disable reader quiescing. #[must_use] pub fn set_reader_quiesce(self, reader_quiesce: bool) -> Self { ARCacheBuilder { // stats: self.stats, max: self.max, read_max: self.read_max, watermark: self.watermark, reader_quiesce, } } /// Consume this builder, returning a cache if successful. If configured parameters are /// missing or incorrect, a None will be returned. pub fn build(self) -> Option> where K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, { let ARCacheBuilder { // stats, max, read_max, watermark, reader_quiesce, } = self; let (max, read_max) = max.zip(read_max)?; /* let stats = stats .map(|mut stats| { // Reset some values to 0 because else it doesn't really max sense ... stats.shared_max = 0; stats.freq = 0; stats.recent = 0; stats.all_seen_keys = 0; stats.p_weight = 0; // Ensure that p isn't too large. Could happen if the cache size was reduced from // a previous stats invocation. // // NOTE: I decided not to port this over, because it may cause issues in early cache start // up when the lists are empty. // stats.p_weight = stats.p_weight.clamp(0, max); stats }) .unwrap_or_default(); */ let watermark = watermark.unwrap_or(if max < 128 { 0 } else { (max / 20) * 18 }); let watermark = watermark.clamp(0, max); // If the watermark is 0, always track from the start. let init_watermark = watermark == 0; // let (stat_tx, stat_rx) = unbounded(); // The hit queue is reasonably cheap, so we can let this grow a bit. 
/* let chan_size = max / 20; let chan_size = if chan_size < 16 { 16 } else { chan_size }; let chan_size = chan_size.clamp(0, 128); */ let chan_size = 64; let hit_queue = Arc::new(ArrayQueue::new(chan_size)); // this can oversize and take a lot of time to drain and manage, so we keep this bounded. // let chan_size = chan_size.clamp(0, 64); let chan_size = 32; let inc_queue = Arc::new(ArrayQueue::new(chan_size)); let shared = RwLock::new(ArcShared { max, read_max, // stat_tx, hit_queue: hit_queue.clone(), inc_queue: inc_queue.clone(), watermark, reader_quiesce, }); let inner = Mutex::new(ArcInner { // We use p from the former stats. p: 0, freq: LL::new(), rec: LL::new(), ghost_freq: LL::new(), ghost_rec: LL::new(), haunted: LL::new(), // stat_rx, hit_queue, inc_queue, min_txid: 0, }); Some(ARCache { cache: HashTrie::new(), shared, inner, // stats: CowCell::new(stats), above_watermark: AtomicBool::new(init_watermark), }) } } impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, > ARCache { /// Use ARCacheBuilder instead #[deprecated(since = "0.2.20", note = "please use`ARCacheBuilder` instead")] pub fn new( total: usize, threads: usize, ex_ro_miss: usize, ex_rw_miss: usize, read_cache: bool, ) -> Self { ARCacheBuilder::default() .set_expected_workload(total, threads, ex_ro_miss, ex_rw_miss, read_cache) .build() .expect("Invalid cache parameters!") } /// Use ARCacheBuilder instead #[deprecated(since = "0.2.20", note = "please use`ARCacheBuilder` instead")] pub fn new_size(max: usize, read_max: usize) -> Self { ARCacheBuilder::default() .set_size(max, read_max) .build() .expect("Invalid cache parameters!") } /// Use ARCacheBuilder instead #[deprecated(since = "0.2.20", note = "please use`ARCacheBuilder` instead")] pub fn new_size_watermark(max: usize, read_max: usize, watermark: usize) -> Self { ARCacheBuilder::default() .set_size(max, read_max) .set_watermark(watermark) .build() .expect("Invalid cache parameters!") 
    }

    /// Begin a read operation on the cache. This reader has a thread-local cache for items
    /// that are locally included via `insert`, and can communicate back to the main cache
    /// to safely include items.
    pub fn read_stats<S>(&self, stats: S) -> ARCacheReadTxn<'_, K, V, S>
    where
        S: ARCacheReadStat + Clone,
    {
        let rshared = self.shared.read().unwrap();
        let tlocal = if rshared.read_max > 0 {
            Some(ReadCache {
                set: Map::new(),
                read_size: rshared.read_max,
                tlru: LL::new(),
            })
        } else {
            None
        };
        let above_watermark = self.above_watermark.load(Ordering::Relaxed);
        ARCacheReadTxn {
            caller: self,
            cache: self.cache.read(),
            tlocal,
            // stat_tx: rshared.stat_tx.clone(),
            hit_queue: rshared.hit_queue.clone(),
            inc_queue: rshared.inc_queue.clone(),
            above_watermark,
            reader_quiesce: rshared.reader_quiesce,
            stats,
        }
    }

    /// Begin a read operation on the cache. This reader has a thread-local cache for items
    /// that are locally included via `insert`, and can communicate back to the main cache
    /// to safely include items.
    pub fn read(&self) -> ARCacheReadTxn<'_, K, V, ()> {
        self.read_stats(())
    }

    /// Begin a write operation on the cache. This writer has a thread-local store
    /// for all items that have been included or dirtied in the transaction. Items
    /// may be removed from this cache (ie deleted, invalidated).
    pub fn write(&self) -> ARCacheWriteTxn<'_, K, V, ()> {
        self.write_stats(())
    }

    /// Begin a write operation on the cache, collecting statistics into the provided
    /// stats collector.
    pub fn write_stats<S>(&self, stats: S) -> ARCacheWriteTxn<'_, K, V, S>
    where
        S: ARCacheWriteStat<K>,
    {
        let above_watermark = self.above_watermark.load(Ordering::Relaxed);
        ARCacheWriteTxn {
            caller: self,
            cache: self.cache.write(),
            tlocal: Map::new(),
            hit: UnsafeCell::new(Vec::new()),
            clear: UnsafeCell::new(false),
            above_watermark,
            // read_ops: UnsafeCell::new(0),
            stats,
        }
    }

    /*
    /// View the statistics for this cache. These values are a snapshot of a point in
    /// time and may not be accurate at "this exact moment". You *may* pin this value
    /// for as long as you wish without causing excess memory usage or transaction
    /// cleanup issues.
    pub fn view_stats(&self) -> CowCellReadTxn<CacheStats> {
        self.stats.read()
    }
    */

    /*
    /// Reset access stats for this cache. This will reset all the hit and include
    /// fields, but values related to current cache state are not altered. This can
    /// be useful after a bulk import to prevent pollution of the stats from that
    /// operation.
    pub fn reset_stats(&self) {
        let mut stat_guard = self.stats.write();
        stat_guard.reader_hits = 0;
        stat_guard.reader_tlocal_hits = 0;
        stat_guard.reader_tlocal_includes = 0;
        stat_guard.reader_includes = 0;
        stat_guard.write_hits = 0;
        stat_guard.write_includes = 0;
        stat_guard.write_modifies = 0;
        stat_guard.freq_evicts = 0;
        stat_guard.recent_evicts = 0;
        stat_guard.commit();
    }
    */

    /*
    fn try_write(&self) -> Option<ARCacheWriteTxn<'_, K, V, ()>> {
        self.try_write_stats(()).ok()
    }
    */

    fn try_write_stats<S>(&self, stats: S) -> Result<ARCacheWriteTxn<'_, K, V, S>, S>
    where
        S: ARCacheWriteStat<K>,
    {
        match self.cache.try_write() {
            Some(cache) => {
                let above_watermark = self.above_watermark.load(Ordering::Relaxed);
                Ok(ARCacheWriteTxn {
                    caller: self,
                    cache,
                    tlocal: Map::new(),
                    hit: UnsafeCell::new(Vec::new()),
                    clear: UnsafeCell::new(false),
                    above_watermark,
                    // read_ops: UnsafeCell::new(0),
                    stats,
                })
            }
            None => Err(stats),
        }
    }

    /// If the lock is available, attempt to quiesce the cache's async channel states. If the lock
    /// is currently held, no action is taken.
    pub fn try_quiesce_stats<S>(&self, stats: S) -> S
    where
        S: ARCacheWriteStat<K>,
    {
        // It seems like a good idea to skip this when not at the watermark, but
        // that can cause low-pressure caches to not submit includes properly.
        // if self.above_watermark.load(Ordering::Relaxed) {
        match self.try_write_stats(stats) {
            Ok(wr_txn) => wr_txn.commit(),
            Err(stats) => stats,
        }
    }

    /// If the lock is available, attempt to quiesce the cache's async channel states. If the lock
    /// is currently held, no action is taken.
    pub fn try_quiesce(&self) {
        self.try_quiesce_stats(())
    }

    fn calc_p_freq(ghost_rec_len: usize, ghost_freq_len: usize, p: &mut usize, size: usize) {
        let delta = if ghost_rec_len > ghost_freq_len {
            ghost_rec_len / ghost_freq_len
        } else {
            1
        } * size;
        let p_was = *p;
        if delta < *p {
            *p -= delta
        } else {
            *p = 0
        }
        tracing::trace!("f {} >>> {}", p_was, *p);
    }

    fn calc_p_rec(
        cap: usize,
        ghost_rec_len: usize,
        ghost_freq_len: usize,
        p: &mut usize,
        size: usize,
    ) {
        let delta = if ghost_freq_len > ghost_rec_len {
            ghost_freq_len / ghost_rec_len
        } else {
            1
        } * size;
        let p_was = *p;
        if delta <= cap - *p {
            *p += delta
        } else {
            *p = cap
        }
        tracing::trace!("r {} >>> {}", p_was, *p);
    }

    fn drain_tlocal_inc<S>(
        &self,
        cache: &mut HashTrieWriteTxn<K, CacheItem<K, V>>,
        inner: &mut ArcInner<K, V>,
        shared: &ArcShared<K, V>,
        tlocal: Map<K, ThreadCacheItem<V>>,
        commit_txid: u64,
        stats: &mut S,
    ) where
        S: ARCacheWriteStat<K>,
    {
        // Drain tlocal into the main cache.
        tlocal.into_iter().for_each(|(k, tcio)| {
            let r = cache.get_mut(&k);
            match (r, tcio) {
                (None, ThreadCacheItem::Present(tci, clean, size)) => {
                    assert!(clean);
                    let llp = inner.rec.append_k(CacheItemInner {
                        k: k.clone(),
                        txid: commit_txid,
                        size,
                    });
                    // stats.write_includes += 1;
                    stats.include(&k);
                    cache.insert(k, CacheItem::Rec(llp, tci));
                }
                (None, ThreadCacheItem::Removed(clean)) => {
                    assert!(clean);
                    // Mark this as haunted
                    let llp = inner.haunted.append_k(CacheItemInner {
                        k: k.clone(),
                        txid: commit_txid,
                        size: 1,
                    });
                    cache.insert(k, CacheItem::Haunted(llp));
                }
                (Some(ref mut ci), ThreadCacheItem::Removed(clean)) => {
                    assert!(clean);
                    // From whatever set we were in, pop and move to haunted.
                    let mut next_state = match ci {
                        CacheItem::Freq(llp, _v) => {
                            // println!("tlocal {:?} Freq -> Haunted", k);
                            inner.freq.extract(*llp);
                            unsafe { (**llp).as_mut().txid = commit_txid };
                            inner.haunted.append_n(*llp);
                            CacheItem::Haunted(*llp)
                        }
                        CacheItem::Rec(llp, _v) => {
                            // println!("tlocal {:?} Rec -> Haunted", k);
                            // Remove the node and put it into haunted.
inner.rec.extract(*llp); unsafe { (**llp).as_mut().txid = commit_txid }; inner.haunted.append_n(*llp); CacheItem::Haunted(*llp) } CacheItem::GhostFreq(llp) => { // println!("tlocal {:?} GhostFreq -> Freq", k); inner.ghost_freq.extract(*llp); unsafe { (**llp).as_mut().txid = commit_txid }; inner.haunted.append_n(*llp); CacheItem::Haunted(*llp) } CacheItem::GhostRec(llp) => { // println!("tlocal {:?} GhostRec -> Rec", k); inner.ghost_rec.extract(*llp); unsafe { (**llp).as_mut().txid = commit_txid }; inner.haunted.append_n(*llp); CacheItem::Haunted(*llp) } CacheItem::Haunted(llp) => { // println!("tlocal {:?} Haunted -> Rec", k); unsafe { (**llp).as_mut().txid = commit_txid }; CacheItem::Haunted(*llp) } }; // Now change the state. mem::swap(*ci, &mut next_state); } // Done! https://github.com/rust-lang/rust/issues/68354 will stabilise // in 1.44 so we can prevent a need for a clone. (Some(ref mut ci), ThreadCacheItem::Present(tci, clean, size)) => { assert!(clean); // * as we include each item, what state was it in before? // It's in the cache - what action must we take? let mut next_state = match ci { CacheItem::Freq(llp, _v) => { // println!("tlocal {:?} Freq -> Freq", k); inner.freq.extract(*llp); unsafe { (**llp).as_mut().txid = commit_txid }; unsafe { (**llp).as_mut().size = size }; // Move the list item to it's head. inner.freq.append_n(*llp); stats.modify(unsafe { &(**llp).as_mut().k }); // Update v. CacheItem::Freq(*llp, tci) } CacheItem::Rec(llp, _v) => { // println!("tlocal {:?} Rec -> Freq", k); // Remove the node and put it into freq. 
                            inner.rec.extract(*llp);
                            unsafe { (**llp).as_mut().txid = commit_txid };
                            unsafe { (**llp).as_mut().size = size };
                            inner.freq.append_n(*llp);
                            stats.modify(unsafe { &(**llp).as_mut().k });
                            CacheItem::Freq(*llp, tci)
                        }
                        CacheItem::GhostFreq(llp) => {
                            // println!("tlocal {:?} GhostFreq -> Freq", k);
                            // Adjust p
                            Self::calc_p_freq(
                                inner.ghost_rec.len(),
                                inner.ghost_freq.len(),
                                &mut inner.p,
                                size,
                            );
                            inner.ghost_freq.extract(*llp);
                            unsafe { (**llp).as_mut().txid = commit_txid };
                            unsafe { (**llp).as_mut().size = size };
                            inner.freq.append_n(*llp);
                            stats.ghost_frequent_revive(unsafe { &(**llp).as_mut().k });
                            CacheItem::Freq(*llp, tci)
                        }
                        CacheItem::GhostRec(llp) => {
                            // println!("tlocal {:?} GhostRec -> Rec", k);
                            // Adjust p
                            Self::calc_p_rec(
                                shared.max,
                                inner.ghost_rec.len(),
                                inner.ghost_freq.len(),
                                &mut inner.p,
                                size,
                            );
                            inner.ghost_rec.extract(*llp);
                            unsafe { (**llp).as_mut().txid = commit_txid };
                            unsafe { (**llp).as_mut().size = size };
                            inner.rec.append_n(*llp);
                            stats.ghost_recent_revive(unsafe { &(**llp).as_mut().k });
                            CacheItem::Rec(*llp, tci)
                        }
                        CacheItem::Haunted(llp) => {
                            // stats.write_includes += 1;
                            // println!("tlocal {:?} Haunted -> Rec", k);
                            inner.haunted.extract(*llp);
                            unsafe { (**llp).as_mut().txid = commit_txid };
                            unsafe { (**llp).as_mut().size = size };
                            inner.rec.append_n(*llp);
                            stats.include_haunted(unsafe { &(**llp).as_mut().k });
                            CacheItem::Rec(*llp, tci)
                        }
                    };
                    // Now change the state.
                    mem::swap(*ci, &mut next_state);
                }
            }
        });
    }

    fn drain_hit_rx(
        &self,
        cache: &mut HashTrieWriteTxn<K, CacheItem<K, V>>,
        inner: &mut ArcInner<K, V>,
        commit_ts: Instant,
    ) {
        // * for each item
        // while let Ok(ce) = inner.rx.try_recv() {
        // TODO: Find a way to remove these clones here!
while let Some(ce) = inner.hit_queue.pop() { let CacheHitEvent { t, k_hash } = ce; if let Some(ref mut ci_slots) = unsafe { cache.get_slot_mut(k_hash) } { for ref mut ci in ci_slots.iter_mut() { let mut next_state = match &ci.v { CacheItem::Freq(llp, v) => { // println!("rxhit {:?} Freq -> Freq", k); inner.freq.touch(*llp); CacheItem::Freq(*llp, v.clone()) } CacheItem::Rec(llp, v) => { // println!("rxhit {:?} Rec -> Freq", k); inner.rec.extract(*llp); inner.freq.append_n(*llp); CacheItem::Freq(*llp, v.clone()) } // While we can't add this from nothing, we can // at least keep it in the ghost sets. CacheItem::GhostFreq(llp) => { // println!("rxhit {:?} GhostFreq -> GhostFreq", k); inner.ghost_freq.touch(*llp); CacheItem::GhostFreq(*llp) } CacheItem::GhostRec(llp) => { // println!("rxhit {:?} GhostRec -> GhostRec", k); inner.ghost_rec.touch(*llp); CacheItem::GhostRec(*llp) } CacheItem::Haunted(llp) => { // println!("rxhit {:?} Haunted -> Haunted", k); // We can't do anything about this ... CacheItem::Haunted(*llp) } }; mem::swap(&mut ci.v, &mut next_state); } // for each item in the bucket. } // Do nothing, it must have been evicted. // Stop processing the queue, we are up to "now". if t >= commit_ts { break; } } } fn drain_inc_rx( &self, cache: &mut HashTrieWriteTxn>, inner: &mut ArcInner, shared: &ArcShared, commit_ts: Instant, stats: &mut S, ) where S: ARCacheWriteStat, { while let Some(ce) = inner.inc_queue.pop() { // Update if it was inc let CacheIncludeEvent { t, k, v: iv, txid, size, } = ce; let mut r = cache.get_mut(&k); match r { Some(ref mut ci) => { let mut next_state = match &ci { CacheItem::Freq(llp, _v) => { if unsafe { (**llp).as_ref().txid >= txid } || inner.min_txid > txid { // println!("rxinc {:?} Freq -> Freq (touch only)", k); // Our cache already has a newer value, keep it. inner.freq.touch(*llp); None } else { // println!("rxinc {:?} Freq -> Freq (update)", k); // The value is newer, update. 
inner.freq.extract(*llp); unsafe { (**llp).as_mut().txid = txid }; unsafe { (**llp).as_mut().size = size }; inner.freq.append_n(*llp); stats.modify(unsafe { &(**llp).as_mut().k }); Some(CacheItem::Freq(*llp, iv)) } } CacheItem::Rec(llp, v) => { inner.rec.extract(*llp); if unsafe { (**llp).as_ref().txid >= txid } || inner.min_txid > txid { // println!("rxinc {:?} Rec -> Freq (touch only)", k); inner.freq.append_n(*llp); Some(CacheItem::Freq(*llp, v.clone())) } else { // println!("rxinc {:?} Rec -> Freq (update)", k); unsafe { (**llp).as_mut().txid = txid }; unsafe { (**llp).as_mut().size = size }; inner.freq.append_n(*llp); stats.modify(unsafe { &(**llp).as_mut().k }); Some(CacheItem::Freq(*llp, iv)) } } CacheItem::GhostFreq(llp) => { // Adjust p if unsafe { (**llp).as_ref().txid > txid } || inner.min_txid > txid { // println!("rxinc {:?} GhostFreq -> GhostFreq", k); // The cache version is newer, this is just a hit. let size = unsafe { (**llp).as_mut().size }; Self::calc_p_freq( inner.ghost_rec.len(), inner.ghost_freq.len(), &mut inner.p, size, ); inner.ghost_freq.touch(*llp); None } else { // This item is newer, so we can include it. 
// println!("rxinc {:?} GhostFreq -> Rec", k); Self::calc_p_freq( inner.ghost_rec.len(), inner.ghost_freq.len(), &mut inner.p, size, ); inner.ghost_freq.extract(*llp); unsafe { (**llp).as_mut().txid = txid }; unsafe { (**llp).as_mut().size = size }; inner.freq.append_n(*llp); stats.ghost_frequent_revive(unsafe { &(**llp).as_mut().k }); Some(CacheItem::Freq(*llp, iv)) } } CacheItem::GhostRec(llp) => { // Adjust p if unsafe { (**llp).as_ref().txid > txid } || inner.min_txid > txid { // println!("rxinc {:?} GhostRec -> GhostRec", k); let size = unsafe { (**llp).as_mut().size }; Self::calc_p_rec( shared.max, inner.ghost_rec.len(), inner.ghost_freq.len(), &mut inner.p, size, ); inner.ghost_rec.touch(*llp); None } else { // println!("rxinc {:?} GhostRec -> Rec", k); Self::calc_p_rec( shared.max, inner.ghost_rec.len(), inner.ghost_freq.len(), &mut inner.p, size, ); inner.ghost_rec.extract(*llp); unsafe { (**llp).as_mut().txid = txid }; unsafe { (**llp).as_mut().size = size }; inner.rec.append_n(*llp); stats.ghost_recent_revive(unsafe { &(**llp).as_mut().k }); Some(CacheItem::Rec(*llp, iv)) } } CacheItem::Haunted(llp) => { if unsafe { (**llp).as_ref().txid > txid } || inner.min_txid > txid { // println!("rxinc {:?} Haunted -> Haunted", k); None } else { // println!("rxinc {:?} Haunted -> Rec", k); inner.haunted.extract(*llp); unsafe { (**llp).as_mut().txid = txid }; unsafe { (**llp).as_mut().size = size }; inner.rec.append_n(*llp); stats.include_haunted(unsafe { &(**llp).as_mut().k }); Some(CacheItem::Rec(*llp, iv)) } } }; if let Some(ref mut next_state) = next_state { mem::swap(*ci, next_state); } } None => { // It's not present - include it! // println!("rxinc {:?} None -> Rec", k); if txid >= inner.min_txid { let llp = inner.rec.append_k(CacheItemInner { k: k.clone(), txid, size, }); stats.include(&k); cache.insert(k, CacheItem::Rec(llp, iv)); } } }; // Stop processing the queue, we are up to "now". 
if t >= commit_ts { break; } } } fn drain_tlocal_hits( &self, cache: &mut HashTrieWriteTxn>, inner: &mut ArcInner, // shared: &ArcShared, commit_txid: u64, hit: Vec, ) { // Stats updated by caller hit.into_iter().for_each(|k_hash| { // * everything hit must be in main cache now, so bring these // all to the relevant item heads. // * Why do this last? Because the write is the "latest" we want all the fresh // written items in the cache over the "read" hits, it gives us some aprox // of time ordering, but not perfect. // Find the item in the cache. // * based on it's type, promote it in the correct list, or move it. // How does this prevent incorrect promotion from rec to freq? txid? // println!("Checking Hit ... {:?}", k); let mut r = unsafe { cache.get_slot_mut(k_hash) }; match r { Some(ref mut ci_slots) => { for ref mut ci in ci_slots.iter_mut() { // This differs from above - we skip if we don't touch anything // that was added in this txn. This is to prevent double touching // anything that was included in a write. // TODO: find a way to remove these clones let mut next_state = match &ci.v { CacheItem::Freq(llp, v) => { if unsafe { (**llp).as_ref().txid != commit_txid } { // println!("hit {:?} Freq -> Freq", k); inner.freq.touch(*llp); Some(CacheItem::Freq(*llp, v.clone())) } else { None } } CacheItem::Rec(llp, v) => { if unsafe { (**llp).as_ref().txid != commit_txid } { // println!("hit {:?} Rec -> Freq", k); inner.rec.extract(*llp); inner.freq.append_n(*llp); Some(CacheItem::Freq(*llp, v.clone())) } else { None } } _ => { // Ignore hits on items that may have been cleared. None } }; // Now change the state. if let Some(ref mut next_state) = next_state { mem::swap(&mut ci.v, next_state); } } // for each ci in slots } None => { // Impossible state! 
unreachable!(); } } }); } /* fn drain_stat_rx( &self, inner: &mut ArcInner, stats: &mut CacheStats, commit_ts: Instant, ) { while let Ok(ce) = inner.stat_rx.try_recv() { let ReaderStatEvent { t, ops, hits, tlocal_hits, tlocal_includes, reader_includes, reader_failed_includes, } = ce; stats.reader_ops += ops as u64; stats.reader_tlocal_hits += tlocal_hits as u64; stats.reader_tlocal_includes += tlocal_includes as u64; stats.reader_hits += hits as u64; stats.reader_includes += reader_includes as u64; stats.reader_failed_includes += reader_failed_includes as u64; // Stop processing the queue, we are up to "now". if t >= commit_ts { break; } } } */ #[allow(clippy::cognitive_complexity)] fn evict( &self, cache: &mut HashTrieWriteTxn>, inner: &mut ArcInner, shared: &ArcShared, commit_txid: u64, stats: &mut S, ) where S: ARCacheWriteStat, { debug_assert!(inner.p <= shared.max); // Convince the compiler copying is okay. let p = inner.p; if inner.rec.len() + inner.freq.len() > shared.max { // println!("Checking cache evict"); /* println!( "from -> rec {:?}, freq {:?}", inner.rec.len(), inner.freq.len() ); */ let delta = (inner.rec.len() + inner.freq.len()) - shared.max; // We have overflowed by delta. As we are not "evicting as we go" we have to work out // what we should have evicted up to now. // // keep removing from rec until == p OR delta == 0, and if delta remains, then remove from freq. let rec_to_len = if inner.p == 0 { // println!("p == 0 => {:?}", inner.rec.len()); debug_assert!(delta <= inner.rec.len()); // We are fully weight to freq, so only remove in rec. inner.rec.len() - delta } else if inner.rec.len() > inner.p { // There is a partial weighting, how much do we need to move? let rec_delta = inner.rec.len() - inner.p; if rec_delta > delta { /* println!( "p ({:?}) <= rec ({:?}), rec_delta ({:?}) > delta ({:?})", inner.p, inner.rec.len(), rec_delta, delta ); */ // We will have removed enough through delta alone in rec. 
inner.rec.len() - delta } else { /* println!( "p ({:?}) <= rec ({:?}), rec_delta ({:?}) <= delta ({:?})", inner.p, inner.rec.len(), rec_delta, delta ); */ // Remove the full delta, and excess will be removed from freq. inner.rec.len() - rec_delta } } else { // rec is already below p, therefore we must need to remove in freq, and // we need to consider how much is in rec. // println!("p ({:?}) > rec ({:?})", inner.p, inner.rec.len()); inner.rec.len() }; // Now we can get the expected sizes; debug_assert!(shared.max >= rec_to_len); let freq_to_len = shared.max - rec_to_len; // println!("move to -> rec {:?}, freq {:?}", rec_to_len, freq_to_len); debug_assert!(freq_to_len + rec_to_len <= shared.max); // stats.freq_evicts += (inner.freq.len() - freq_to_len) as u64; // stats.recent_evicts += (inner.rec.len() - rec_to_len) as u64; // stats.frequent_evict_add((inner.freq.len() - freq_to_len) as u64); // stats.recent_evict_add((inner.rec.len() - rec_to_len) as u64); evict_to_len!( cache, inner.rec, &mut inner.ghost_rec, rec_to_len, commit_txid, stats ); evict_to_len!( cache, inner.freq, &mut inner.ghost_freq, freq_to_len, commit_txid, stats ); // Finally, do an evict of the ghost sets if they are too long - these are weighted // inverse to the above sets. Note the freq to len in ghost rec, and rec to len in // ghost freq! if inner.ghost_rec.len() > (shared.max - p) { evict_to_haunted_len!( cache, inner.ghost_rec, &mut inner.haunted, freq_to_len, commit_txid ); } if inner.ghost_freq.len() > p { evict_to_haunted_len!( cache, inner.ghost_freq, &mut inner.haunted, rec_to_len, commit_txid ); } } } #[allow(clippy::unnecessary_mut_passed)] fn commit( &self, mut cache: HashTrieWriteTxn>, tlocal: Map>, hit: Vec, clear: bool, init_above_watermark: bool, // read_ops: u32, mut stats: S, ) -> S where S: ARCacheWriteStat, { // What is the time? let commit_ts = Instant::now(); let commit_txid = cache.get_txid(); // Copy p + init cache sizes for adjustment. 
let mut inner = self.inner.lock().unwrap(); let shared = self.shared.read().unwrap(); // Did we request to be cleared? If so, we move everything to a ghost set // that was live. // // we also set the min_txid watermark which prevents any inclusion of // any item that existed before this point in time. if clear { // Set the watermark of this txn. inner.min_txid = commit_txid; // Indicate that we evicted all to ghost/freq // stats.frequent_evict_add(inner.freq.len() as u64); // stats.recent_evict_add(inner.rec.len() as u64); // Move everything active into ghost sets. drain_ll_to_ghost!( &mut cache, inner.freq, inner.ghost_freq, inner.ghost_rec, commit_txid, &mut stats ); drain_ll_to_ghost!( &mut cache, inner.rec, inner.ghost_freq, inner.ghost_rec, commit_txid, &mut stats ); } // Why is it okay to drain the rx/tlocal and create the cache in a temporary // oversize? Because these values in the queue/tlocal are already in memory // and we are moving them to the cache, we are not actually using any more // memory (well, not significantly more). By adding everything, then evicting // we also get better and more accurate hit patterns over the cache based on what // was used. This gives us an advantage over other cache types - we can see // patterns based on temporal usage that other caches can't, at the expense that // it may take some moments for that cache pattern to sync to the main thread. self.drain_tlocal_inc( &mut cache, inner.deref_mut(), shared.deref(), tlocal, commit_txid, &mut stats, ); // drain rx until empty or time >= time. self.drain_inc_rx( &mut cache, inner.deref_mut(), shared.deref(), commit_ts, &mut stats, ); self.drain_hit_rx(&mut cache, inner.deref_mut(), commit_ts); // drain the tlocal hits into the main cache. // stats.write_hits += hit.len() as u64; // stats.write_read_ops += read_ops as u64; self.drain_tlocal_hits(&mut cache, inner.deref_mut(), commit_txid, hit); // now clean the space for each of the primary caches, evicting into the ghost sets. 
        // * It's possible that both caches are now over-sized if rx was empty
        //   but wr inc'd many items.
        // * p has possibly changed width, causing a balance shift
        // * and ghost items have been included changing ghost list sizes.
        // So we need to do a clean up/balance of all the list lengths.
        self.evict(
            &mut cache,
            inner.deref_mut(),
            shared.deref(),
            commit_txid,
            &mut stats,
        );

        // self.drain_stat_rx(inner.deref_mut(), stats, commit_ts);
        stats.p_weight(inner.p as u64);
        stats.shared_max(shared.max as u64);
        stats.freq(inner.freq.len() as u64);
        stats.recent(inner.rec.len() as u64);
        stats.all_seen_keys(cache.len() as u64);

        // Indicate if we are at/above the watermark, so that readers/writers begin to send their
        // hit events and we can start to set up/order our arc sets correctly.
        //
        // If we drop below this again, they'll go back to insert/remove content only mode.
        if init_above_watermark {
            if (inner.freq.len() + inner.rec.len()) < shared.watermark {
                self.above_watermark.store(false, Ordering::Relaxed);
            }
        } else if (inner.freq.len() + inner.rec.len()) >= shared.watermark {
            self.above_watermark.store(true, Ordering::Relaxed);
        }

        // Commit on the wr txn.
        cache.commit();
        // done!
        // eprintln!("quiesce took - {:?}", commit_ts.elapsed());

        // Return the stats to the caller.
        stats
    }
}

impl<
        K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
        V: Clone + Debug + Sync + Send + 'static,
        S: ARCacheWriteStat<K>,
    > ARCacheWriteTxn<'_, K, V, S>
{
    /// Commit the changes of this writer, making them globally visible. This causes
    /// all items written to this thread's local store to become visible in the main
    /// cache.
    ///
    /// To rollback (abort) an operation, simply do not call commit (consider
    /// std::mem::drop on the write transaction).
    pub fn commit(self) -> S {
        self.caller.commit(
            self.cache,
            self.tlocal,
            self.hit.into_inner(),
            self.clear.into_inner(),
            self.above_watermark,
            // self.read_ops.into_inner(),
            self.stats,
        )
    }

    /// Clear all items of the cache.
    /// This operation does not take effect until you commit.
    /// After calling "clear", you may then include new items which will be stored thread
    /// locally until you commit.
    pub fn clear(&mut self) {
        // Mark that we have been requested to clear the cache.
        unsafe {
            let clear_ptr = self.clear.get();
            *clear_ptr = true;
        }
        // Dump the hit log.
        unsafe {
            let hit_ptr = self.hit.get();
            (*hit_ptr).clear();
        }
        // Throw away any read ops we did on the old values since they'll
        // mess up stat numbers.
        self.stats.cache_clear();
        /*
        unsafe {
            let op_ptr = self.read_ops.get();
            (*op_ptr) = 0;
        }
        */
        // Dump the thread local state.
        self.tlocal.clear();
        // From this point any get will miss on the main cache.
        // Inserts are accepted.
    }

    /// Attempt to retrieve a k-v pair from the cache. If it is present in the main cache OR
    /// the thread local cache, a `Some` is returned, else you will receive a `None`. On a
    /// `None`, you must then consult the external data source that this structure is acting
    /// as a cache for.
    pub fn get<Q>(&mut self, k: &Q) -> Option<&V>
    where
        K: Borrow<Q>,
        Q: Hash + Eq + Ord + ?Sized,
    {
        let k_hash: u64 = self.cache.prehash(k);

        // Track the attempted read op.
        /*
        unsafe {
            let op_ptr = self.read_ops.get();
            (*op_ptr) += 1;
        }
        */
        self.stats.cache_read();

        let r: Option<&V> = if let Some(tci) = self.tlocal.get(k) {
            match tci {
                ThreadCacheItem::Present(v, _clean, _size) => {
                    let v = v as *const _;
                    unsafe { Some(&(*v)) }
                }
                ThreadCacheItem::Removed(_clean) => {
                    return None;
                }
            }
        } else {
            // If we have been requested to clear, the main cache is "empty"
            // but we can't do that until a commit, so we just flag it and avoid.
            let is_cleared = unsafe {
                let clear_ptr = self.clear.get();
                *clear_ptr
            };
            if !is_cleared {
                if let Some(v) = self.cache.get_prehashed(k, k_hash) {
                    (*v).to_vref()
                } else {
                    None
                }
            } else {
                None
            }
        };
        if r.is_some() {
            self.stats.cache_hit();
        }
        // How do we track this was a hit?
        // Remember, we don't track misses - they are *implied* by the fact they'll trigger
        // an inclusion from the external system. Subsequently, any further re-hit on an
        // included value WILL be tracked, allowing arc to adjust appropriately.
        if self.above_watermark && r.is_some() {
            unsafe {
                let hit_ptr = self.hit.get();
                (*hit_ptr).push(k_hash);
            }
        }
        r
    }

    /// If a value is in the thread local cache, retrieve it for mutation. If the value
    /// is not in the thread local cache, it is retrieved and cloned from the main cache. If
    /// the value had been marked for removal, it must first be re-inserted.
    ///
    /// # Safety
    ///
    /// Since you are mutating the state of the value, if you have sized insertions you MAY
    /// break this since you can change the weight of the value to be inconsistent.
    pub fn get_mut<Q>(&mut self, k: &Q, make_dirty: bool) -> Option<&mut V>
    where
        K: Borrow<Q>,
        Q: Hash + Eq + Ord + ?Sized,
    {
        // If we were requested to clear, we can not copy to the tlocal cache.
        let is_cleared = unsafe {
            let clear_ptr = self.clear.get();
            *clear_ptr
        };
        // If the main cache has NOT been cleared (ie it has items) and our tlocal
        // does NOT contain this key, then we prime it.
        if !is_cleared && !self.tlocal.contains_key(k) {
            // Copy from the core cache into the tlocal.
            let k_hash: u64 = self.cache.prehash(k);
            if let Some(v) = self.cache.get_prehashed(k, k_hash) {
                if let Some((dk, dv, ds)) = v.to_kvsref() {
                    self.tlocal.insert(
                        dk.clone(),
                        ThreadCacheItem::Present(dv.clone(), !make_dirty, ds),
                    );
                }
            }
        };
        // Now return from the tlocal, if present, a mut pointer.
        match self.tlocal.get_mut(k) {
            Some(ThreadCacheItem::Present(v, clean, _size)) => {
                if make_dirty && *clean {
                    *clean = false;
                }
                let v = v as *mut _;
                unsafe { Some(&mut (*v)) }
            }
            _ => None,
        }
    }

    /// Determine if this cache contains the following key.
    pub fn contains_key<Q>(&mut self, k: &Q) -> bool
    where
        K: Borrow<Q>,
        Q: Hash + Eq + Ord + ?Sized,
    {
        self.get(k).is_some()
    }

    /// Add a value to the cache.
    /// This may be because you have had a cache miss and
    /// now wish to include it in the thread local storage, or because you have written
    /// a new value and want it to be submitted for caching. This item is marked as
    /// clean, IE you have synced it to whatever associated store exists.
    pub fn insert(&mut self, k: K, v: V) {
        self.tlocal.insert(k, ThreadCacheItem::Present(v, true, 1));
    }

    /// Insert an item to the cache, with an associated weight/size factor. See also `insert`
    pub fn insert_sized(&mut self, k: K, v: V, size: NonZeroUsize) {
        self.tlocal
            .insert(k, ThreadCacheItem::Present(v, true, size.get()));
    }

    /// Remove this value from the thread local cache, IE mask it from being
    /// returned until this thread performs an insert. This item is marked as clean,
    /// IE you have synced it to whatever associated store exists.
    pub fn remove(&mut self, k: K) {
        self.tlocal.insert(k, ThreadCacheItem::Removed(true));
    }

    /// Add a value to the cache. This may be because you have had a cache miss and
    /// now wish to include it in the thread local storage, or because you have written
    /// a new value and want it to be submitted for caching. This item is marked as
    /// dirty, because you have *not* synced it. You MUST call iter_mut_mark_clean before calling
    /// `commit` on this transaction, or a panic will occur.
    pub fn insert_dirty(&mut self, k: K, v: V) {
        self.tlocal.insert(k, ThreadCacheItem::Present(v, false, 1));
    }

    /// Insert a dirty item to the cache, with an associated weight/size factor. See also `insert_dirty`
    pub fn insert_dirty_sized(&mut self, k: K, v: V, size: NonZeroUsize) {
        self.tlocal
            .insert(k, ThreadCacheItem::Present(v, false, size.get()));
    }

    /// Remove this value from the thread local cache, IE mask it from being
    /// returned until this thread performs an insert. This item is marked as
    /// dirty, because you have *not* synced it. You MUST call iter_mut_mark_clean before calling
    /// `commit` on this transaction, or a panic will occur.
    pub fn remove_dirty(&mut self, k: K) {
        self.tlocal.insert(k, ThreadCacheItem::Removed(false));
    }

    /// Determines if dirty elements exist in this cache or not.
    pub fn is_dirty(&self) -> bool {
        self.iter_dirty().take(1).next().is_some()
    }

    /// Yields an iterator over all values that are currently dirty. As the iterator
    /// progresses, items will NOT be marked clean. This allows you to examine
    /// any currently dirty items in the cache.
    pub fn iter_dirty(&self) -> impl Iterator<Item = (&K, Option<&V>)> {
        self.tlocal
            .iter()
            .filter(|(_k, v)| match v {
                ThreadCacheItem::Present(_v, c, _size) => !c,
                ThreadCacheItem::Removed(c) => !c,
            })
            .map(|(k, v)| {
                // Get the data.
                let data = match v {
                    ThreadCacheItem::Present(v, _c, _size) => Some(v),
                    ThreadCacheItem::Removed(_c) => None,
                };
                (k, data)
            })
    }

    /// Yields a mutable iterator over all values that are currently dirty. As the iterator
    /// progresses, items will NOT be marked clean. This allows you to modify and
    /// change any currently dirty items as required.
    pub fn iter_mut_dirty(&mut self) -> impl Iterator<Item = (&K, Option<&mut V>)> {
        self.tlocal
            .iter_mut()
            .filter(|(_k, v)| match v {
                ThreadCacheItem::Present(_v, c, _size) => !c,
                ThreadCacheItem::Removed(c) => !c,
            })
            .map(|(k, v)| {
                // Get the data.
                let data = match v {
                    ThreadCacheItem::Present(v, _c, _size) => Some(v),
                    ThreadCacheItem::Removed(_c) => None,
                };
                (k, data)
            })
    }

    /// Yields an iterator over all values that are currently dirty. As the iterator
    /// progresses, items will be marked clean. This is where you should sync dirty
    /// cache content to your associated store. The iterator is K, Option<V>, where
    /// the Option indicates if the item has been removed (None) or is updated (Some).
    pub fn iter_mut_mark_clean(&mut self) -> impl Iterator<Item = (&K, Option<&mut V>)> {
        self.tlocal
            .iter_mut()
            .filter(|(_k, v)| match v {
                ThreadCacheItem::Present(_v, c, _size) => !c,
                ThreadCacheItem::Removed(c) => !c,
            })
            .map(|(k, v)| {
                // Mark it clean.
                match v {
                    ThreadCacheItem::Present(_v, c, _size) => *c = true,
                    ThreadCacheItem::Removed(c) => *c = true,
                }
                // Get the data.
                let data = match v {
                    ThreadCacheItem::Present(v, _c, _size) => Some(v),
                    ThreadCacheItem::Removed(_c) => None,
                };
                (k, data)
            })
    }

    /// Yield an iterator over all currently live and valid cache items.
    pub fn iter(&self) -> impl Iterator<Item = (&K, &V)> {
        self.cache.values().filter_map(|ci| match &ci {
            CacheItem::Rec(lln, v) => unsafe {
                let cii = &*((**lln).k.as_ptr());
                Some((&cii.k, v))
            },
            CacheItem::Freq(lln, v) => unsafe {
                let cii = &*((**lln).k.as_ptr());
                Some((&cii.k, v))
            },
            _ => None,
        })
    }

    /// Yield an iterator over all currently live and valid items in the
    /// recent access list.
    pub fn iter_rec(&self) -> impl Iterator<Item = &K> {
        self.cache.values().filter_map(|ci| match &ci {
            CacheItem::Rec(lln, _) => unsafe {
                let cii = &*((**lln).k.as_ptr());
                Some(&cii.k)
            },
            _ => None,
        })
    }

    /// Yield an iterator over all currently live and valid items in the
    /// frequent access list.
    pub fn iter_freq(&self) -> impl Iterator<Item = &K> {
        self.cache.values().filter_map(|ci| match &ci {
            CacheItem::Freq(lln, _) => unsafe {
                let cii = &*((**lln).k.as_ptr());
                Some(&cii.k)
            },
            _ => None,
        })
    }

    #[cfg(test)]
    pub(crate) fn iter_ghost_rec(&self) -> impl Iterator<Item = &K> {
        self.cache.values().filter_map(|ci| match &ci {
            CacheItem::GhostRec(lln) => unsafe {
                let cii = &*((**lln).k.as_ptr());
                Some(&cii.k)
            },
            _ => None,
        })
    }

    #[cfg(test)]
    pub(crate) fn iter_ghost_freq(&self) -> impl Iterator<Item = &K> {
        self.cache.values().filter_map(|ci| match &ci {
            CacheItem::GhostFreq(lln) => unsafe {
                let cii = &*((**lln).k.as_ptr());
                Some(&cii.k)
            },
            _ => None,
        })
    }

    #[cfg(test)]
    pub(crate) fn peek_hit(&self) -> &[u64] {
        let hit_ptr = self.hit.get();
        unsafe { &(*hit_ptr) }
    }

    #[cfg(test)]
    pub(crate) fn peek_cache<Q>(&self, k: &Q) -> CacheState
    where
        K: Borrow<Q>,
        Q: Hash + Eq + Ord,
    {
        if let Some(v) = self.cache.get(k) {
            (*v).to_state()
        } else {
            CacheState::None
        }
    }

    #[cfg(test)]
    pub(crate) fn peek_stat(&self) -> CStat {
        let inner =
self.caller.inner.lock().unwrap(); let shared = self.caller.shared.read().unwrap(); CStat { max: shared.max, cache: self.cache.len(), tlocal: self.tlocal.len(), freq: inner.freq.len(), rec: inner.rec.len(), ghost_freq: inner.ghost_freq.len(), ghost_rec: inner.ghost_rec.len(), haunted: inner.haunted.len(), p: inner.p, } } // to_snapshot } impl< K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Debug + Sync + Send + 'static, S: ARCacheReadStat + Clone, > ARCacheReadTxn<'_, K, V, S> { /// Attempt to retieve a k-v pair from the cache. If it is present in the main cache OR /// the thread local cache, a `Some` is returned, else you will recieve a `None`. On a /// `None`, you must then consult the external data source that this structure is acting /// as a cache for. pub fn get(&mut self, k: &Q) -> Option<&V> where K: Borrow, Q: Hash + Eq + Ord + ?Sized, { let k_hash: u64 = self.cache.prehash(k); self.stats.cache_read(); // self.ops += 1; let mut hits = false; let mut tlocal_hits = false; let r: Option<&V> = self .tlocal .as_ref() .and_then(|cache| { cache.set.get(k).map(|v| unsafe { // Indicate a hit on the tlocal cache. tlocal_hits = true; if self.above_watermark { let _ = self.hit_queue.push(CacheHitEvent { t: Instant::now(), k_hash, }); } let v = &(**v).as_ref().v as *const _; // This discards the lifetime and repins it to the lifetime of `self`. &(*v) }) }) .or_else(|| { self.cache.get_prehashed(k, k_hash).and_then(|v| { (*v).to_vref().map(|vin| unsafe { // Indicate a hit on the main cache. hits = true; if self.above_watermark { let _ = self.hit_queue.push(CacheHitEvent { t: Instant::now(), k_hash, }); } let vin = vin as *const _; &(*vin) }) }) }); if tlocal_hits { self.stats.cache_local_hit() } else if hits { self.stats.cache_main_hit() }; r } /// Determine if this cache contains the following key. 
pub fn contains_key(&mut self, k: &Q) -> bool where K: Borrow, Q: Hash + Eq + Ord + ?Sized, { self.get(k).is_some() } /// Insert an item to the cache, with an associated weight/size factor. See also `insert` pub fn insert_sized(&mut self, k: K, v: V, size: NonZeroUsize) { let mut v = v; let size = size.get(); // Send a copy forward through time and space. // let _ = self.tx.try_send( if self .inc_queue .push(CacheIncludeEvent { t: Instant::now(), k: k.clone(), v: v.clone(), txid: self.cache.get_txid(), size, }) .is_ok() { self.stats.include(); } else { self.stats.failed_include(); } // We have a cache, so lets update it. if let Some(ref mut cache) = self.tlocal { self.stats.local_include(); let n = if cache.tlru.len() >= cache.read_size { let n = cache.tlru.pop(); // swap the old_key/old_val out let mut k_clone = k.clone(); unsafe { mem::swap(&mut k_clone, &mut (*n).as_mut().k); mem::swap(&mut v, &mut (*n).as_mut().v); } // remove old K from the tree: cache.set.remove(&k_clone); n } else { // Just add it! cache.tlru.append_k(ReadCacheItem { k: k.clone(), v, size, }) }; let r = cache.set.insert(k, n); // There should never be a previous value. assert!(r.is_none()); } } /// Add a value to the cache. This may be because you have had a cache miss and /// now wish to include in the thread local storage. /// /// Note that is invalid to insert an item who's key already exists in this thread local cache, /// and this is asserted IE will panic if you attempt this. It is also invalid for you to insert /// a value that does not match the source-of-truth state, IE inserting a different /// value than another thread may percieve. This is a *read* thread, so you should only be adding /// values that are relevant to this read transaction and this point in time. If you do not /// heed this warning, you may alter the fabric of time and space and have some interesting /// distortions in your data over time. 
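    // Illustrative sketch, not part of concread: the thread-local read cache in
    // `insert_sized` above bounds its size by popping the oldest node and swapping
    // the new key/value into it. A stand-alone analogue of that pop-oldest policy,
    // built only on std collections (the names `TLocalSketch` and `read_size` are
    // hypothetical, and the real cache uses an intrusive linked list, not a VecDeque):

    // ```rust
    // use std::collections::{HashMap, VecDeque};
    //
    // struct TLocalSketch<K, V> {
    //     read_size: usize,
    //     order: VecDeque<K>,
    //     set: HashMap<K, V>,
    // }
    //
    // impl<K: std::hash::Hash + Eq + Clone, V> TLocalSketch<K, V> {
    //     fn new(read_size: usize) -> Self {
    //         TLocalSketch { read_size, order: VecDeque::new(), set: HashMap::new() }
    //     }
    //
    //     fn insert(&mut self, k: K, v: V) {
    //         if self.order.len() >= self.read_size {
    //             // Evict the oldest key to stay within the fixed read size.
    //             if let Some(old_k) = self.order.pop_front() {
    //                 self.set.remove(&old_k);
    //             }
    //         }
    //         self.order.push_back(k.clone());
    //         let prev = self.set.insert(k, v);
    //         // As in the real cache, re-inserting a live key is invalid.
    //         assert!(prev.is_none());
    //     }
    //
    //     fn get(&self, k: &K) -> Option<&V> {
    //         self.set.get(k)
    //     }
    // }
    // ```
    //
    // With `read_size = 2`, inserting a third key evicts the first, which mirrors
    // why a value can be present in a read transaction yet absent after quiesce.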
    pub fn insert(&mut self, k: K, v: V) {
        self.insert_sized(k, v, unsafe { NonZeroUsize::new_unchecked(1) })
    }

    /// Finish this read transaction, returning the stats collector.
    pub fn finish(self) -> S {
        let stats = self.stats.clone();
        drop(self);
        stats
    }
}

impl<
        K: Hash + Eq + Ord + Clone + Debug + Sync + Send + 'static,
        V: Clone + Debug + Sync + Send + 'static,
        S: ARCacheReadStat + Clone,
    > Drop for ARCacheReadTxn<'_, K, V, S>
{
    fn drop(&mut self) {
        // We could make this check the queue sizes rather than blindly quiescing
        if self.reader_quiesce {
            self.caller.try_quiesce();
        }
    }
}

#[cfg(test)]
mod tests {
    use super::stats::{TraceStat, WriteCountStat};
    use super::ARCache as Arc;
    use super::ARCacheBuilder;
    use super::CStat;
    use super::CacheState;
    use std::num::NonZeroUsize;

    #[test]
    fn test_cache_arc_basic() {
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");
        let mut wr_txn = arc.write();
        assert!(wr_txn.get(&1).is_none());
        assert!(wr_txn.peek_hit().is_empty());
        wr_txn.insert(1, 1);
        assert!(wr_txn.get(&1) == Some(&1));
        assert!(wr_txn.peek_hit().len() == 1);
        wr_txn.commit();
        // Now we start the second txn, and see if it's in there.
        let mut wr_txn = arc.write();
        assert!(wr_txn.peek_cache(&1) == CacheState::Rec);
        assert!(wr_txn.get(&1) == Some(&1));
        assert!(wr_txn.peek_hit().len() == 1);
        wr_txn.commit();
        // And now check it's moved to Freq due to the extra
        let wr_txn = arc.write();
        assert!(wr_txn.peek_cache(&1) == CacheState::Freq);
        println!("{:?}", wr_txn.peek_stat());
    }

    #[test]
    fn test_cache_evict() {
        let _ = tracing_subscriber::fmt::try_init();
        println!("== 1");
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");
        let stats = TraceStat {};
        let mut wr_txn = arc.write_stats(stats);
        assert!(
            CStat {
                max: 4,
                cache: 0,
                tlocal: 0,
                freq: 0,
                rec: 0,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // In the first txn we insert 4 items.
        wr_txn.insert(1, 1);
        wr_txn.insert(2, 2);
        wr_txn.insert(3, 3);
        wr_txn.insert(4, 4);
        assert!(
            CStat {
                max: 4,
                cache: 0,
                tlocal: 4,
                freq: 0,
                rec: 0,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        let stats = wr_txn.commit();
        // Now we start the second txn, and check the stats.
        println!("== 2");
        let mut wr_txn = arc.write_stats(stats);
        assert!(
            CStat {
                max: 4,
                cache: 4,
                tlocal: 0,
                freq: 0,
                rec: 4,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // Now touch two items, this promotes them to the freq set.
        // Remember, a double hit doesn't weigh any more than 1 hit.
        assert!(wr_txn.get(&1) == Some(&1));
        assert!(wr_txn.get(&1) == Some(&1));
        assert!(wr_txn.get(&2) == Some(&2));
        let stats = wr_txn.commit();
        // Now we start the third txn, and check the stats.
        println!("== 3");
        let mut wr_txn = arc.write_stats(stats);
        assert!(
            CStat {
                max: 4,
                cache: 4,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // Add one more item, this will trigger an evict.
        wr_txn.insert(5, 5);
        let stats = wr_txn.commit();
        // Now we start the fourth txn, and check the stats.
        println!("== 4");
        let mut wr_txn = arc.write_stats(stats);
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 5,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 1,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // And assert what's in the sets to be sure of what went where.
        // 🚨 Can no longer peek these with hashmap backing as the keys may
        // be evicted out-of-order, but the stats are correct!

        // Now touch the two recent items to bring them also to freq
        let rec_set: Vec<usize> = wr_txn.iter_rec().take(2).copied().collect();
        assert!(wr_txn.get(&rec_set[0]) == Some(&rec_set[0]));
        assert!(wr_txn.get(&rec_set[1]) == Some(&rec_set[1]));
        let stats = wr_txn.commit();
        // Now we start the fifth txn, and check the stats.
        println!("== 5");
        let mut wr_txn = arc.write_stats(stats);
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 5,
                tlocal: 0,
                freq: 4,
                rec: 0,
                ghost_freq: 0,
                ghost_rec: 1,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // And assert what's in the sets to be sure of what went where.
        // 🚨 Can no longer peek these with hashmap backing as the keys may
        // be evicted out-of-order, but the stats are correct!

        // Now touch the one item that's in ghost rec - this will trigger
        // an evict from ghost freq
        let grec: usize = wr_txn.iter_ghost_rec().take(1).copied().next().unwrap();
        wr_txn.insert(grec, grec);
        assert!(wr_txn.get(&grec) == Some(&grec));
        // When we add 3, we are basically issuing a demand that the rec set should be
        // allowed to grow as we had a potential cache miss here.
        let stats = wr_txn.commit();
        // Now we start the sixth txn, and check the stats.
        println!("== 6");
        let mut wr_txn = arc.write_stats(stats);
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 5,
                tlocal: 0,
                freq: 3,
                rec: 1,
                ghost_freq: 1,
                ghost_rec: 0,
                haunted: 0,
                p: 1
            } == wr_txn.peek_stat()
        );
        // And assert what's in the sets to be sure of what went where.
        // 🚨 Can no longer peek these with hashmap backing as the keys may
        // be evicted out-of-order, but the stats are correct!
        assert!(wr_txn.peek_cache(&grec) == CacheState::Rec);

        // Right, seventh txn - we show how a cache scan doesn't cause p shifting or evict.
        // tl;dr - attempt to include a bunch in a scan, and it will be ignored as p is low,
        // and any miss on rec won't shift p unless it's in the ghost rec.
        wr_txn.insert(10, 10);
        wr_txn.insert(11, 11);
        wr_txn.insert(12, 12);
        let stats = wr_txn.commit();
        println!("== 7");
        let mut wr_txn = arc.write_stats(stats);
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 3,
                rec: 1,
                ghost_freq: 1,
                ghost_rec: 3,
                haunted: 0,
                p: 1
            } == wr_txn.peek_stat()
        );
        // 🚨 Can no longer peek these with hashmap backing as the keys may
        // be evicted out-of-order, but the stats are correct!

        // Eighth txn - now that we had a demand for items before, we re-demand them - this will trigger
        // a shift in p, causing some more to be in the rec cache.
        let grec_set: Vec<usize> = wr_txn.iter_ghost_rec().take(3).copied().collect();
        println!("{:?}", grec_set);
        grec_set
            .iter()
            .for_each(|i| println!("{:?}", wr_txn.peek_cache(i)));
        grec_set.iter().for_each(|i| wr_txn.insert(*i, *i));
        grec_set
            .iter()
            .for_each(|i| println!("{:?}", wr_txn.peek_cache(i)));
        wr_txn.commit();
        println!("== 8");
        let mut wr_txn = arc.write();
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 0,
                rec: 4,
                ghost_freq: 4,
                ghost_rec: 0,
                haunted: 0,
                p: 4
            } == wr_txn.peek_stat()
        );
        grec_set
            .iter()
            .for_each(|i| println!("{:?}", wr_txn.peek_cache(i)));
        grec_set
            .iter()
            .for_each(|i| assert!(wr_txn.peek_cache(i) == CacheState::Rec));

        // Now let's go back the other way - we want freq items to come back.
        let gfreq_set: Vec<usize> = wr_txn.iter_ghost_freq().take(4).copied().collect();
        gfreq_set.iter().for_each(|i| wr_txn.insert(*i, *i));
        wr_txn.commit();
        println!("== 9");
        let wr_txn = arc.write();
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 4,
                rec: 0,
                ghost_freq: 0,
                ghost_rec: 4,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // 🚨 Can no longer peek these with hashmap backing as the keys may
        // be evicted out-of-order, but the stats are correct!
        gfreq_set
            .iter()
            .for_each(|i| assert!(wr_txn.peek_cache(i) == CacheState::Freq));
        // And done!
        let () = wr_txn.commit();
        // See what stats did
        // let stats = arc.view_stats();
        // println!("{:?}", *stats);
    }

    #[test]
    fn test_cache_concurrent_basic() {
        // Now we want to check some basic interactions of read and write together.
        // Setup the cache.
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");

        // start a rd
        {
            let mut rd_txn = arc.read();
            // add items to the rd
            rd_txn.insert(1, 1);
            rd_txn.insert(2, 2);
            rd_txn.insert(3, 3);
            rd_txn.insert(4, 4);
            // Should be in the tlocal
            // assert!(rd_txn.get(&1).is_some());
            // assert!(rd_txn.get(&2).is_some());
            // assert!(rd_txn.get(&3).is_some());
            // assert!(rd_txn.get(&4).is_some());
            // end the rd
        }
        arc.try_quiesce();

        // What state is the cache now in?
        println!("== 2");
        let wr_txn = arc.write();
        println!("{:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 4,
                tlocal: 0,
                freq: 0,
                rec: 4,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        assert!(wr_txn.peek_cache(&1) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&2) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&3) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&4) == CacheState::Rec);
        // Magic! Without a single write op we included items!

        // Let's have the read touch two items, and then add two new.
        // This will trigger evict on 1/2
        {
            let mut rd_txn = arc.read();
            // add items to the rd
            assert!(rd_txn.get(&3) == Some(&3));
            assert!(rd_txn.get(&4) == Some(&4));
            rd_txn.insert(5, 5);
            rd_txn.insert(6, 6);
            // end the rd
        }
        // Now commit and check the state.
        wr_txn.commit();
        println!("== 3");
        let wr_txn = arc.write();
        assert!(
            CStat {
                max: 4,
                cache: 6,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 2,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        assert!(wr_txn.peek_cache(&1) == CacheState::GhostRec);
        assert!(wr_txn.peek_cache(&2) == CacheState::GhostRec);
        assert!(wr_txn.peek_cache(&3) == CacheState::Freq);
        assert!(wr_txn.peek_cache(&4) == CacheState::Freq);
        assert!(wr_txn.peek_cache(&5) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&6) == CacheState::Rec);

        // Now trigger hits on 1/2 which will cause a shift in P.
        {
            let mut rd_txn = arc.read();
            // add items to the rd
            rd_txn.insert(1, 1);
            rd_txn.insert(2, 2);
            // end the rd
        }
        wr_txn.commit();
        println!("== 4");
        let wr_txn = arc.write();
        assert!(
            CStat {
                max: 4,
                cache: 6,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 2,
                haunted: 0,
                p: 2
            } == wr_txn.peek_stat()
        );
        assert!(wr_txn.peek_cache(&1) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&2) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&3) == CacheState::Freq);
        assert!(wr_txn.peek_cache(&4) == CacheState::Freq);
        assert!(wr_txn.peek_cache(&5) == CacheState::GhostRec);
        assert!(wr_txn.peek_cache(&6) == CacheState::GhostRec);

        // See what stats did
        // let stats = arc.view_stats();
        // println!("stats 1: {:?}", *stats);
        // assert!(stats.reader_hits == 2);
        // assert!(stats.reader_includes == 8);
        // assert!(stats.reader_tlocal_includes == 8);
        // assert!(stats.reader_tlocal_hits == 0);
    }

    // Test edge cases that are horrifying and could destroy people's lives
    // and sanity.
    #[test]
    fn test_cache_concurrent_cursed_1() {
        // Case 1 - It's possible for a read transaction to last for a long time,
        // and then have a cache include, which may cause an attempt to include
        // an outdated value into the cache. To handle this the haunted set exists
        // so that all keys and their eviction ids are always tracked for all of time
        // to ensure that we never incorrectly include a value that may have been updated
        // more recently.
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");
        // Start a wr
        let mut wr_txn = arc.write();
        // Start a rd
        let mut rd_txn = arc.read();

        // Add the value 1,1 via the wr.
        wr_txn.insert(1, 1);
        // assert 1 is not in rd.
        assert!(rd_txn.get(&1).is_none());

        // Commit wr
        wr_txn.commit();
        // Even after the commit, it's not in rd.
        assert!(rd_txn.get(&1).is_none());

        // begin wr
        let mut wr_txn = arc.write();
        // We now need to flood the cache, to cause ghost rec eviction.
        wr_txn.insert(10, 1);
        wr_txn.insert(11, 1);
        wr_txn.insert(12, 1);
        wr_txn.insert(13, 1);
        wr_txn.insert(14, 1);
        wr_txn.insert(15, 1);
        wr_txn.insert(16, 1);
        wr_txn.insert(17, 1);
        // commit wr
        wr_txn.commit();

        // begin wr
        let wr_txn = arc.write();
        // assert that 1 is haunted.
        assert!(wr_txn.peek_cache(&1) == CacheState::Haunted);
        // assert 1 is not in rd.
        assert!(rd_txn.get(&1).is_none());
        // now that 1 is haunted, in rd attempt to insert 1, X
        rd_txn.insert(1, 100);
        // commit wr
        wr_txn.commit();

        // start wr
        let wr_txn = arc.write();
        // assert that 1 is still haunted.
        assert!(wr_txn.peek_cache(&1) == CacheState::Haunted);
        // assert that 1, x is in rd.
        assert!(rd_txn.get(&1) == Some(&100));
        // done!
    }

    #[test]
    fn test_cache_clear() {
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");
        // Start a wr
        let mut wr_txn = arc.write();
        // Add a bunch of values, and touch some twice.
        wr_txn.insert(10, 10);
        wr_txn.insert(11, 11);
        wr_txn.insert(12, 12);
        wr_txn.insert(13, 13);
        wr_txn.insert(14, 14);
        wr_txn.insert(15, 15);
        wr_txn.insert(16, 16);
        wr_txn.insert(17, 17);
        wr_txn.commit();
        // Begin a new write.
        let mut wr_txn = arc.write();
        // Touch two values that are in the rec set.
        let rec_set: Vec<usize> = wr_txn.iter_rec().take(2).copied().collect();
        println!("{:?}", rec_set);
        assert!(wr_txn.get(&rec_set[0]) == Some(&rec_set[0]));
        assert!(wr_txn.get(&rec_set[1]) == Some(&rec_set[1]));
        // commit wr
        wr_txn.commit();

        // Begin a new write.
        let mut wr_txn = arc.write();
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 4,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // Clear
        wr_txn.clear();
        // Now commit
        wr_txn.commit();

        // Now check their states.
        let wr_txn = arc.write();
        // See what stats did
        println!("stat -> {:?}", wr_txn.peek_stat());
        // stat -> CStat { max: 4, cache: 8, tlocal: 0, freq: 0, rec: 0, ghost_freq: 2, ghost_rec: 6, haunted: 0, p: 0 }
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 0,
                rec: 0,
                ghost_freq: 2,
                ghost_rec: 6,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // let stats = arc.view_stats();
        // println!("{:?}", *stats);
    }

    #[test]
    fn test_cache_clear_rollback() {
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");
        // Start a wr
        let mut wr_txn = arc.write();
        // Add a bunch of values, and touch some twice.
        wr_txn.insert(10, 10);
        wr_txn.insert(11, 11);
        wr_txn.insert(12, 12);
        wr_txn.insert(13, 13);
        wr_txn.insert(14, 14);
        wr_txn.insert(15, 15);
        wr_txn.insert(16, 16);
        wr_txn.insert(17, 17);
        wr_txn.commit();
        // Begin a new write.
        let mut wr_txn = arc.write();
        let rec_set: Vec<usize> = wr_txn.iter_rec().take(2).copied().collect();
        println!("{:?}", rec_set);
        let r = wr_txn.get(&rec_set[0]);
        println!("{:?}", r);
        assert!(r == Some(&rec_set[0]));
        assert!(wr_txn.get(&rec_set[1]) == Some(&rec_set[1]));
        // commit wr
        wr_txn.commit();
        // Begin a new write.
        let mut wr_txn = arc.write();
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 4,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // Clear
        wr_txn.clear();
        // Now abort the clear - should do nothing!
        drop(wr_txn);

        // Check the states, should not have changed
        let wr_txn = arc.write();
        println!("stat -> {:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 8,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 4,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
    }

    #[test]
    fn test_cache_clear_cursed() {
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");

        // Setup for the test
        // --
        let mut wr_txn = arc.write();
        wr_txn.insert(10, 1);
        wr_txn.commit();
        // --
        let wr_txn = arc.write();
        assert!(wr_txn.peek_cache(&10) == CacheState::Rec);
        wr_txn.commit();
        // --

        // Okay, now the test starts. First, we begin a read
        let mut rd_txn = arc.read();
        // Then while that read exists, we open a write, and conduct
        // a cache clear.
        let mut wr_txn = arc.write();
        wr_txn.clear();
        // Commit the clear write.
        wr_txn.commit();

        // Now on the read, we perform a touch of an item, and we include
        // something that was not yet in the cache.
        assert!(rd_txn.get(&10) == Some(&1));
        rd_txn.insert(11, 1);
        // Complete the read
        std::mem::drop(rd_txn);
        // Perform a cache quiesce
        arc.try_quiesce();
        // --

        // Assert that the items that we provided were NOT included, and are
        // in the correct states.
        let wr_txn = arc.write();
        assert!(wr_txn.peek_cache(&10) == CacheState::GhostRec);
        println!("--> {:?}", wr_txn.peek_cache(&11));
        assert!(wr_txn.peek_cache(&11) == CacheState::None);
    }

    #[test]
    fn test_cache_dirty_write() {
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 4)
            .build()
            .expect("Invalid cache parameters!");
        let mut wr_txn = arc.write();
        wr_txn.insert_dirty(10, 1);
        wr_txn.iter_mut_mark_clean().for_each(|(_k, _v)| {});
        wr_txn.commit();
    }

    #[test]
    fn test_cache_read_no_tlocal() {
        // Check a cache with no read local thread capacity
        // Setup the cache.
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 0)
            .build()
            .expect("Invalid cache parameters!");
        // start a rd
        {
            let mut rd_txn = arc.read();
            // add items to the rd
            rd_txn.insert(1, 1);
            rd_txn.insert(2, 2);
            rd_txn.insert(3, 3);
            rd_txn.insert(4, 4);
            // end the rd
            // Everything should be missing from the tlocal.
            assert!(rd_txn.get(&1).is_none());
            assert!(rd_txn.get(&2).is_none());
            assert!(rd_txn.get(&3).is_none());
            assert!(rd_txn.get(&4).is_none());
        }
        arc.try_quiesce();

        // What state is the cache now in?
        println!("== 2");
        let wr_txn = arc.write();
        assert!(
            CStat {
                max: 4,
                cache: 4,
                tlocal: 0,
                freq: 0,
                rec: 4,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        assert!(wr_txn.peek_cache(&1) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&2) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&3) == CacheState::Rec);
        assert!(wr_txn.peek_cache(&4) == CacheState::Rec);

        // let stats = arc.view_stats();
        // println!("stats 1: {:?}", *stats);
        // assert!(stats.reader_includes == 4);
        // assert!(stats.reader_tlocal_includes == 0);
        // assert!(stats.reader_tlocal_hits == 0);
    }

    #[derive(Clone, Debug)]
    struct Weighted {
        _i: u64,
    }

    #[test]
    fn test_cache_weighted() {
        let arc: Arc<usize, Weighted> = ARCacheBuilder::default()
            .set_size(4, 0)
            .build()
            .expect("Invalid cache parameters!");
        // let init_stats = arc.view_stats();
        // println!("{:?}", *init_stats);
        let mut wr_txn = arc.write();
        assert!(
            CStat {
                max: 4,
                cache: 0,
                tlocal: 0,
                freq: 0,
                rec: 0,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        // In the first txn we insert 2 weight 2 items.
        wr_txn.insert_sized(1, Weighted { _i: 1 }, NonZeroUsize::new(2).unwrap());
        wr_txn.insert_sized(2, Weighted { _i: 2 }, NonZeroUsize::new(2).unwrap());
        assert!(
            CStat {
                max: 4,
                cache: 0,
                tlocal: 2,
                freq: 0,
                rec: 0,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        wr_txn.commit();

        // Now once committed, the proper sizes kick in.
        let wr_txn = arc.write();
        // eprintln!("{:?}", wr_txn.peek_stat());
        assert!(
            CStat {
                max: 4,
                cache: 2,
                tlocal: 0,
                freq: 0,
                rec: 4,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        wr_txn.commit();

        // Check the numbers move properly.
        let mut wr_txn = arc.write();
        wr_txn.get(&1);
        wr_txn.commit();
        let mut wr_txn = arc.write();
        assert!(
            CStat {
                max: 4,
                cache: 2,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 0,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        wr_txn.insert_sized(3, Weighted { _i: 3 }, NonZeroUsize::new(2).unwrap());
        wr_txn.insert_sized(4, Weighted { _i: 4 }, NonZeroUsize::new(2).unwrap());
        wr_txn.commit();

        // Check the evicts
        let wr_txn = arc.write();
        assert!(
            CStat {
                max: 4,
                cache: 4,
                tlocal: 0,
                freq: 2,
                rec: 2,
                ghost_freq: 0,
                ghost_rec: 4,
                haunted: 0,
                p: 0
            } == wr_txn.peek_stat()
        );
        wr_txn.commit();
        // let stats = arc.view_stats();
        // println!("{:?}", *stats);
        // println!("{:?}", (*stats).change_since(&*init_stats));
    }

    #[test]
    fn test_cache_stats_reload() {
        let _ = tracing_subscriber::fmt::try_init();
        // Make a cache
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 0)
            .build()
            .expect("Invalid cache parameters!");

        let stats = WriteCountStat::default();
        let mut wr_txn = arc.write_stats(stats);
        wr_txn.insert(1, 1);
        let stats = wr_txn.commit();
        tracing::trace!("stats 1: {:?}", stats);
    }

    #[test]
    fn test_cache_mut_inplace() {
        // Make a cache
        let arc: Arc<usize, usize> = ARCacheBuilder::default()
            .set_size(4, 0)
            .build()
            .expect("Invalid cache parameters!");
        let mut wr_txn = arc.write();
        assert!(wr_txn.get_mut(&1, false).is_none());

        // It was inserted, can mutate. This is the tlocal present state.
        wr_txn.insert(1, 1);
        {
            let mref = wr_txn.get_mut(&1, false).unwrap();
            *mref = 2;
        }
        assert!(wr_txn.get_mut(&1, false) == Some(&mut 2));
        wr_txn.commit();

        // It's in the main cache, can mutate immediately and the tlocal is primed.
        let mut wr_txn = arc.write();
        {
            let mref = wr_txn.get_mut(&1, false).unwrap();
            *mref = 3;
        }
        assert!(wr_txn.get_mut(&1, false) == Some(&mut 3));
        wr_txn.commit();

        // Marked for remove, can not mut.
        let mut wr_txn = arc.write();
        wr_txn.remove(1);
        assert!(wr_txn.get_mut(&1, false).is_none());
        wr_txn.commit();
    }
}
concread-0.4.6/src/arcache/stats.rs
use std::fmt::Debug;

/// Write statistics for ARCache
pub trait ARCacheWriteStat<K> {
    // RW phase trackers
    /// _
    fn cache_clear(&mut self) {}
    /// _
    fn cache_read(&mut self) {}
    /// _
    fn cache_hit(&mut self) {}

    // Commit phase trackers
    /// _
    fn include(&mut self, _k: &K) {}
    /// _
    fn include_haunted(&mut self, k: &K) {
        self.include(k)
    }
    /// _
    fn modify(&mut self, _k: &K) {}
    /// _
    fn ghost_frequent_revive(&mut self, _k: &K) {}
    /// _
    fn ghost_recent_revive(&mut self, _k: &K) {}
    /// _
    fn evict_from_recent(&mut self, _k: &K) {}
    /// _
    fn evict_from_frequent(&mut self, _k: &K) {}

    // Size of things after the operation.
    /// _
    fn p_weight(&mut self, _p: u64) {}
    /// _
    fn shared_max(&mut self, _i: u64) {}
    /// _
    fn freq(&mut self, _i: u64) {}
    /// _
    fn recent(&mut self, _i: u64) {}
    /// _
    fn all_seen_keys(&mut self, _i: u64) {}
}

/// Read statistics for ARCache
pub trait ARCacheReadStat {
    /// _
    fn cache_read(&mut self) {}
    /// _
    fn cache_local_hit(&mut self) {}
    /// _
    fn cache_main_hit(&mut self) {}
    /// _
    fn include(&mut self) {}
    /// _
    fn failed_include(&mut self) {}
    /// _
    fn local_include(&mut self) {}
}

impl<K> ARCacheWriteStat<K> for () {}

impl ARCacheReadStat for () {}

#[derive(Debug)]
/// A stat collector that allows tracing the keys of items that are changed
/// during writes and quiesce phases.
pub struct TraceStat {}

impl<K> ARCacheWriteStat<K> for TraceStat
where
    K: Debug,
{
    /// _
    fn include(&mut self, k: &K) {
        tracing::trace!(?k, "include");
    }
    /// _
    fn include_haunted(&mut self, k: &K) {
        tracing::trace!(?k, "include_haunted");
    }
    /// _
    fn modify(&mut self, k: &K) {
        tracing::trace!(?k, "modify");
    }
    /// _
    fn ghost_frequent_revive(&mut self, k: &K) {
        tracing::trace!(?k, "ghost_frequent_revive");
    }
    /// _
    fn ghost_recent_revive(&mut self, k: &K) {
        tracing::trace!(?k, "ghost_recent_revive");
    }
    /// _
    fn evict_from_recent(&mut self, k: &K) {
        tracing::trace!(?k, "evict_from_recent");
    }
    /// _
    fn evict_from_frequent(&mut self, k: &K) {
        tracing::trace!(?k, "evict_from_frequent");
    }
}

/// A simple track of counters from the cache
#[derive(Debug, Default)]
pub struct WriteCountStat {
    /// The number of attempts to read from the cache
    pub read_ops: u64,
    /// The number of cache hits during this operation
    pub read_hits: u64,
    /// The current cache weight between recent and frequent.
    pub p_weight: u64,
    /// The maximum number of items in the shared cache.
    pub shared_max: u64,
    /// The number of items in the frequent set at this point in time.
    pub freq: u64,
    /// The number of items in the recent set at this point in time.
    pub recent: u64,
    /// The number of total keys seen through the cache's lifetime.
    pub all_seen_keys: u64,
}

impl<K> ARCacheWriteStat<K> for WriteCountStat {
    /// _
    fn cache_clear(&mut self) {
        self.read_ops = 0;
        self.read_hits = 0;
    }
    /// _
    fn cache_read(&mut self) {
        self.read_ops += 1;
    }
    /// _
    fn cache_hit(&mut self) {
        self.read_hits += 1;
    }
    /// _
    fn p_weight(&mut self, p: u64) {
        self.p_weight = p;
    }
    /// _
    fn shared_max(&mut self, i: u64) {
        self.shared_max = i;
    }
    /// _
    fn freq(&mut self, i: u64) {
        self.freq = i;
    }
    /// _
    fn recent(&mut self, i: u64) {
        self.recent = i;
    }
    /// _
    fn all_seen_keys(&mut self, i: u64) {
        self.all_seen_keys = i;
    }
}

/// A simple track of counters from the cache
#[derive(Debug, Default, Clone)]
pub struct ReadCountStat {
    /// The number of attempts to read from the cache
    pub read_ops: u64,
    /// The number of cache hits on the thread local cache
    pub local_hit: u64,
    /// The number of cache hits on the main cache
    pub main_hit: u64,
    /// The number of queued inclusions to the main cache
    pub include: u64,
    /// The number of failed inclusions to the main cache
    pub failed_include: u64,
    /// The number of inclusions to the thread local cache
    pub local_include: u64,
}

/// _
impl ARCacheReadStat for ReadCountStat {
    /// _
    fn cache_read(&mut self) {
        self.read_ops += 1;
    }
    /// _
    fn cache_local_hit(&mut self) {
        self.local_hit += 1;
    }
    /// _
    fn cache_main_hit(&mut self) {
        self.main_hit += 1;
    }
    /// _
    fn include(&mut self) {
        self.include += 1;
    }
    /// _
    fn failed_include(&mut self) {
        self.failed_include += 1;
    }
    /// _
    fn local_include(&mut self) {
        self.local_include += 1;
    }
}
concread-0.4.6/src/arcache/traits.rs
//! Traits for allowing customised behaviours for ARCache

/// A trait that allows custom weighting of items in the arc.
pub trait ArcWeight {
    /// Return the weight of this item.
    /// This value MAY be dynamic
    /// as the cache copies this for its internal tracking purposes
    fn arc_weight(&self) -> usize;
}

impl<T> ArcWeight for T {
    #[inline]
    default fn arc_weight(&self) -> usize {
        1
    }
}
concread-0.4.6/src/bptree/asynch.rs
//! Async `BptreeMap` - See the documentation for the sync `BptreeMap`

#[cfg(feature = "serde")]
use serde::{
    de::{Deserialize, Deserializer},
    ser::{Serialize, SerializeMap, Serializer},
};

#[cfg(feature = "serde")]
use crate::utils::MapCollector;

use crate::internals::lincowcell_async::{LinCowCell, LinCowCellReadTxn, LinCowCellWriteTxn};

include!("impl.rs");

impl<K: Clone + Ord + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    BptreeMap<K, V>
{
    /// Initiate a read transaction for the tree, concurrent to any
    /// other readers or writers.
    pub async fn read<'x>(&'x self) -> BptreeMapReadTxn<'x, K, V> {
        let inner = self.inner.read().await;
        BptreeMapReadTxn { inner }
    }

    /// Initiate a write transaction for the tree, exclusive to this
    /// writer, and concurrently to all existing reads.
    pub async fn write<'x>(&'x self) -> BptreeMapWriteTxn<'x, K, V> {
        let inner = self.inner.write().await;
        BptreeMapWriteTxn { inner }
    }
}

impl<K: Clone + Ord + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    BptreeMapWriteTxn<'_, K, V>
{
    /// Commit the changes from this write transaction. Readers after this point
    /// will be able to perceive these changes.
    ///
    /// To abort (unstage changes), just do not call this function.
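    // Illustrative sketch, not concread's LinCowCell: the read/write/commit
    // pattern above in its simplest possible form. Readers take an Arc snapshot
    // that stays stable for the whole transaction, while commit atomically
    // publishes a new generation. The real LinCowCell additionally serialises
    // writers and tracks generations for safe memory reclamation; the name
    // `CowCellSketch` is hypothetical.

    // ```rust
    // use std::sync::{Arc, Mutex};
    //
    // struct CowCellSketch<T: Clone> {
    //     active: Mutex<Arc<T>>,
    // }
    //
    // impl<T: Clone> CowCellSketch<T> {
    //     fn new(t: T) -> Self {
    //         CowCellSketch { active: Mutex::new(Arc::new(t)) }
    //     }
    //
    //     /// A read transaction: a snapshot that stays stable for its lifetime.
    //     fn read(&self) -> Arc<T> {
    //         self.active.lock().unwrap().clone()
    //     }
    //
    //     /// A write transaction: copy the current generation for private mutation.
    //     fn write(&self) -> T {
    //         (**self.active.lock().unwrap()).clone()
    //     }
    //
    //     /// Commit: atomically publish the new generation to future readers.
    //     fn commit(&self, new: T) {
    //         *self.active.lock().unwrap() = Arc::new(new);
    //     }
    // }
    // ```
    //
    // An existing reader keeps seeing its snapshot even after a commit, which is
    // exactly the behaviour test_bptree2_map_weird_txn_behaviour asserts below.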
    pub async fn commit(self) {
        self.inner.commit().await;
    }
}

#[cfg(feature = "serde")]
impl<K, V> Serialize for BptreeMapReadTxn<'_, K, V>
where
    K: Serialize + Clone + Ord + Debug + Sync + Send + 'static,
    V: Serialize + Clone + Sync + Send + 'static,
{
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        let mut state = serializer.serialize_map(Some(self.len()))?;
        for (key, val) in self.iter() {
            state.serialize_entry(key, val)?;
        }
        state.end()
    }
}

#[cfg(feature = "serde")]
impl<'de, K, V> Deserialize<'de> for BptreeMap<K, V>
where
    K: Deserialize<'de> + Clone + Ord + Debug + Sync + Send + 'static,
    V: Deserialize<'de> + Clone + Sync + Send + 'static,
{
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        deserializer.deserialize_map(MapCollector::new())
    }
}

#[cfg(test)]
mod tests {
    use super::BptreeMap;
    use crate::internals::bptree::node::{assert_released, L_CAPACITY};
    // use rand::prelude::*;
    use rand::seq::SliceRandom;

    #[tokio::test]
    async fn test_bptree2_map_basic_write() {
        let bptree: BptreeMap<usize, usize> = BptreeMap::new();
        {
            let mut bpwrite = bptree.write().await;
            // We should be able to insert.
            bpwrite.insert(0, 0);
            bpwrite.insert(1, 1);
            assert!(bpwrite.get(&0) == Some(&0));
            assert!(bpwrite.get(&1) == Some(&1));
            bpwrite.insert(2, 2);
            bpwrite.commit().await;
            // println!("commit");
        }
        {
            // Do a clear, but roll it back.
            let mut bpwrite = bptree.write().await;
            bpwrite.clear();
            // DO NOT commit, this triggers the rollback.
            // println!("post clear");
        }
        {
            let bpwrite = bptree.write().await;
            assert!(bpwrite.get(&0) == Some(&0));
            assert!(bpwrite.get(&1) == Some(&1));
            // println!("fin write");
        }
        std::mem::drop(bptree);
        assert_released();
    }

    #[tokio::test]
    async fn test_bptree2_map_cursed_get_mut() {
        let bptree: BptreeMap<usize, usize> = BptreeMap::new();
        {
            let mut w = bptree.write().await;
            w.insert(0, 0);
            w.commit().await;
        }
        let r1 = bptree.read().await;
        {
            let mut w = bptree.write().await;
            let cursed_zone = w.get_mut(&0).unwrap();
            *cursed_zone = 1;
            // Correctly fails to work as it's a second borrow, which isn't
            // possible once w.remove occurs
            // w.remove(&0);
            // *cursed_zone = 2;
            w.commit().await;
        }
        let r2 = bptree.read().await;
        assert!(r1.get(&0) == Some(&0));
        assert!(r2.get(&0) == Some(&1));
        /*
        // Correctly fails to compile. PHEW!
        let fail = {
            let mut w = bptree.write();
            w.get_mut(&0).unwrap()
        };
        */
        std::mem::drop(r1);
        std::mem::drop(r2);
        std::mem::drop(bptree);
        assert_released();
    }

    #[tokio::test]
    async fn test_bptree2_map_from_iter_1() {
        let ins: Vec<usize> = (0..(L_CAPACITY << 4)).collect();
        let map = BptreeMap::from_iter(ins.into_iter().map(|v| (v, v)));
        {
            let w = map.write().await;
            assert!(w.verify());
            println!("{:?}", w.tree_density());
        }
        // assert!(w.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4)));
        std::mem::drop(map);
        assert_released();
    }

    #[tokio::test]
    async fn test_bptree2_map_from_iter_2() {
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (0..(L_CAPACITY << 4)).collect();
        ins.shuffle(&mut rng);
        let map = BptreeMap::from_iter(ins.into_iter().map(|v| (v, v)));
        {
            let w = map.write().await;
            assert!(w.verify());
            // w.compact_force();
            assert!(w.verify());
            // assert!(w.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4)));
        }
        std::mem::drop(map);
        assert_released();
    }

    async fn bptree_map_basic_concurrency(lower: usize, upper: usize) {
        // Create a map
        let map = BptreeMap::new();
        // add values
        {
            let mut w = map.write().await;
            w.extend((0..lower).map(|v| (v, v)));
            w.commit().await;
        }
        // read
        let r = map.read().await;
        assert!(r.len() == lower);
        for i in 0..lower {
            assert!(r.contains_key(&i))
        }
        // Check a second write doesn't interfere
        {
            let mut w = map.write().await;
            w.extend((lower..upper).map(|v| (v, v)));
            w.commit().await;
        }
        assert!(r.len() == lower);
        // But a new write can see
        let r2 = map.read().await;
        assert!(r2.len() == upper);
        for i in 0..upper {
            assert!(r2.contains_key(&i))
        }
        // Now drain the tree, and the reader should be unaffected.
        {
            let mut w = map.write().await;
            for i in 0..upper {
                assert!(w.remove(&i).is_some())
            }
            w.commit().await;
        }
        // All consistent!
        assert!(r.len() == lower);
        assert!(r2.len() == upper);
        for i in 0..upper {
            assert!(r2.contains_key(&i))
        }
        let r3 = map.read().await;
        // println!("{:?}", r3.len());
        assert!(r3.is_empty());
        std::mem::drop(r);
        std::mem::drop(r2);
        std::mem::drop(r3);
        std::mem::drop(map);
        assert_released();
    }

    #[tokio::test]
    async fn test_bptree2_map_acb_order() {
        // Need to ensure that txns are dropped in order.

        // Add data, enough to cause a split. All data should be *2
        let map = BptreeMap::new();
        // add values
        {
            let mut w = map.write().await;
            w.extend((0..(L_CAPACITY * 2)).map(|v| (v * 2, v * 2)));
            w.commit().await;
        }

        let ro_txn_a = map.read().await;
        // New write, add 1 val
        {
            let mut w = map.write().await;
            w.insert(1, 1);
            w.commit().await;
        }

        let ro_txn_b = map.read().await;
        // ro_txn_b now owns nodes from a

        // New write, update a value
        {
            let mut w = map.write().await;
            w.insert(1, 10001);
            w.commit().await;
        }

        let ro_txn_c = map.read().await;
        // ro_txn_c

        // Drop ro_txn_b
        assert!(ro_txn_b.verify());
        std::mem::drop(ro_txn_b);

        // Are both still valid?
        assert!(ro_txn_a.verify());
        assert!(ro_txn_c.verify());
        // Drop remaining
        std::mem::drop(ro_txn_a);
        std::mem::drop(ro_txn_c);
        std::mem::drop(map);
        assert_released();
    }

    #[tokio::test]
    async fn test_bptree2_map_weird_txn_behaviour() {
        let map: BptreeMap<usize, usize> = BptreeMap::new();

        let mut wr = map.write().await;
        let rd = map.read().await;

        wr.insert(1, 1);
        assert!(rd.get(&1).is_none());
        wr.commit().await;
        assert!(rd.get(&1).is_none());
    }

    #[tokio::test]
    #[cfg_attr(miri, ignore)]
    async fn test_bptree2_map_basic_concurrency_small() {
        bptree_map_basic_concurrency(100, 200).await
    }

    #[tokio::test]
    #[cfg_attr(miri, ignore)]
    async fn test_bptree2_map_basic_concurrency_large() {
        bptree_map_basic_concurrency(10_000, 20_000).await
    }

    #[cfg(feature = "serde")]
    #[tokio::test]
    async fn test_bptree2_serialize_deserialize() {
        let map: BptreeMap<usize, usize> = vec![(10, 11), (15, 16), (20, 21)].into_iter().collect();

        let value = serde_json::to_value(&map.read().await).unwrap();
        assert_eq!(value, serde_json::json!({ "10": 11, "15": 16, "20": 21 }));

        let map: BptreeMap<usize, usize> = serde_json::from_value(value).unwrap();
        let mut vec: Vec<(usize, usize)> = map.read().await.iter().map(|(k, v)| (*k, *v)).collect();
        vec.sort_unstable();
        assert_eq!(vec, [(10, 11), (15, 16), (20, 21)]);
    }
}

concread-0.4.6/src/bptree/impl.rs

use crate::internals::bptree::cursor::CursorReadOps;
use crate::internals::bptree::cursor::{CursorRead, CursorWrite, SuperBlock};
use crate::internals::bptree::iter::{Iter, RangeIter, KeyIter, ValueIter};
use crate::internals::bptree::mutiter::RangeMutIter;
use crate::internals::lincowcell::LinCowCellCapable;
use std::borrow::Borrow;
use std::fmt::Debug;
use std::iter::FromIterator;
use std::ops::RangeBounds;

/// A concurrently readable map based on a modified B+Tree structure.
///
/// This structure can be used in locations where you would otherwise use
/// `RwLock<BTreeMap>` or `Mutex<BTreeMap>`.
///
/// Generally, the concurrent [HashMap](crate::hashmap::HashMap) is a better
/// choice unless you require ordered key storage.
///
/// This is a concurrently readable structure, meaning it has transactional
/// properties. Writers are serialised (one after the other), and readers
/// can exist in parallel with stable views of the structure at a point
/// in time.
///
/// This is achieved through the use of [COW](https://en.wikipedia.org/wiki/Copy-on-write)
/// or [MVCC](https://en.wikipedia.org/wiki/Multiversion_concurrency_control).
/// As a write occurs, subsets of the tree are cloned into the writer thread
/// and then committed later. This may cause memory usage to increase in exchange
/// for a gain in concurrent behaviour.
///
/// Transactions can be rolled back (aborted) without penalty by dropping
/// the `BptreeMapWriteTxn` without calling `commit()`.
pub struct BptreeMap<K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    inner: LinCowCell<SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>,
}

unsafe impl<K: Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> Send for BptreeMap<K, V> {}
unsafe impl<K: Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> Sync for BptreeMap<K, V> {}

/// An active read transaction over a [BptreeMap]. The data in this tree
/// is guaranteed to not change and will remain consistent for the life
/// of this transaction.
pub struct BptreeMapReadTxn<'a, K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    inner: LinCowCellReadTxn<'a, SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>,
}

unsafe impl<K: Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> Send for BptreeMapReadTxn<'_, K, V> {}
unsafe impl<K: Ord + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> Sync for BptreeMapReadTxn<'_, K, V> {}

/// An active write transaction for a [BptreeMap]. The data in this tree
/// may be modified exclusively through this transaction without affecting
/// readers. The write may be rolled back (aborted) by dropping this guard
/// without calling `commit()`. Once `commit()` is called, readers will be
/// able to access and perceive changes in new transactions.
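The snapshot-on-read / commit-or-rollback model described in the doc comment above can be sketched with std types only. This is an illustrative toy, not the crate's implementation: the names `ToyBptreeMap` and `WriteTxn` are invented here, the clone is of the whole map rather than of tree subsets, and writer serialisation is omitted for brevity.

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};

// Toy model of the guard-based API: a write transaction owns a private
// copy of the data; dropping it without commit() discards the copy
// (rollback), while commit() swaps the copy in for future readers.
struct ToyBptreeMap {
    active: Mutex<Arc<BTreeMap<u64, u64>>>,
}

struct WriteTxn<'a> {
    work: BTreeMap<u64, u64>,
    owner: &'a ToyBptreeMap,
}

impl ToyBptreeMap {
    fn new() -> Self {
        ToyBptreeMap { active: Mutex::new(Arc::new(BTreeMap::new())) }
    }

    // A "read transaction": a stable point-in-time snapshot.
    fn read(&self) -> Arc<BTreeMap<u64, u64>> {
        self.active.lock().unwrap().clone()
    }

    // Begin a write by cloning the current state (clone-on-write).
    fn write(&self) -> WriteTxn<'_> {
        let work = (**self.active.lock().unwrap()).clone();
        WriteTxn { work, owner: self }
    }
}

impl WriteTxn<'_> {
    fn insert(&mut self, k: u64, v: u64) {
        self.work.insert(k, v);
    }

    // Publish the private copy; readers opened earlier keep the old Arc.
    fn commit(self) {
        *self.owner.active.lock().unwrap() = Arc::new(self.work);
    }
}

fn main() {
    let map = ToyBptreeMap::new();

    let mut w = map.write();
    w.insert(1, 1);
    w.commit();

    let snapshot = map.read(); // stable view of the committed state

    let mut w2 = map.write();
    w2.insert(2, 2);
    drop(w2); // no commit() -> rollback, the copy is discarded

    assert_eq!(snapshot.len(), 1);
    assert_eq!(map.read().len(), 1); // rollback left the map unchanged
}
```

The real structure additionally serialises writers with a dedicated lock and only clones the tree nodes a write actually touches.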
pub struct BptreeMapWriteTxn<'a, K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    inner: LinCowCellWriteTxn<'a, SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>,
}

enum SnapshotType<'a, K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    R(&'a CursorRead<K, V>),
    W(&'a CursorWrite<K, V>),
}

/// A point-in-time snapshot of the tree from within a read OR write. This is
/// useful for building other transactional types on top of this structure, as
/// you need a way to downcast both BptreeMapReadTxn or BptreeMapWriteTxn to
/// a singular reader type for a number of get_inner() style patterns.
///
/// This snapshot IS safe within the read thread due to the nature of the
/// implementation borrowing the inner tree to prevent mutations within the
/// same thread while the read snapshot is open.
pub struct BptreeMapReadSnapshot<'a, K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    inner: SnapshotType<'a, K, V>,
}

impl<K, V> Default for BptreeMap<K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    fn default() -> Self {
        Self::new()
    }
}

impl<K, V> BptreeMap<K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    /// Construct a new concurrent tree
    pub fn new() -> Self {
        // I acknowledge I understand what is required to make this safe.
        BptreeMap {
            inner: LinCowCell::new(unsafe { SuperBlock::new() }),
        }
    }

    /// Attempt to create a new write, returns None if another writer
    /// already exists.
pub fn try_write(&self) -> Option> { self.inner .try_write() .map(|inner| BptreeMapWriteTxn { inner }) } } impl FromIterator<(K, V)> for BptreeMap { fn from_iter>(iter: I) -> Self { let mut new_sblock = unsafe { SuperBlock::new() }; let prev = new_sblock.create_reader(); let mut cursor = new_sblock.create_writer(); cursor.extend(iter); let _ = new_sblock.pre_commit(cursor, &prev); BptreeMap { inner: LinCowCell::new(new_sblock), } } } impl Extend<(K, V)> for BptreeMapWriteTxn<'_, K, V> { fn extend>(&mut self, iter: I) { self.inner.as_mut().extend(iter); } } impl BptreeMapWriteTxn<'_, K, V> { // == RO methods /// Retrieve a value from the tree. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord + ?Sized, { self.inner.as_ref().search(k) } /// Assert if a key exists in the tree. pub fn contains_key(&self, k: &Q) -> bool where K: Borrow, Q: Ord + ?Sized, { self.inner.as_ref().contains_key(k) } /// returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { self.inner.as_ref().len() } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.inner.as_ref().len() == 0 } /// Iterate over a range of values pub fn range(&self, range: R) -> RangeIter where K: Borrow, T: Ord + ?Sized, R: RangeBounds, { self.inner.as_ref().range(range) } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter { self.inner.as_ref().kv_iter() } /// Iterator over &K pub fn values(&self) -> ValueIter { self.inner.as_ref().v_iter() } /// Iterator over &V pub fn keys(&self) -> KeyIter { self.inner.as_ref().k_iter() } /// Retrieve the first (minimum) key-value pair from the map if it exists. pub fn first_key_value(&self) -> Option<(&K, &V)> { self.inner.as_ref().first_key_value() } /// Retrieve the last (maximum) key-value pair from the map if it exists. 
    pub fn last_key_value(&self) -> Option<(&K, &V)> {
        self.inner.as_ref().last_key_value()
    }

    // (adv) keys
    // (adv) values

    #[allow(unused)]
    pub(crate) fn get_txid(&self) -> u64 {
        self.inner.as_ref().get_txid()
    }

    // == RW methods

    /// Reset this tree to an empty state. As this is within the transaction this
    /// change only takes effect once committed.
    pub fn clear(&mut self) {
        self.inner.as_mut().clear()
    }

    /// Insert or update a value by key. If the value previously existed it is returned
    /// as `Some(V)`. If the value did not previously exist this returns `None`.
    pub fn insert(&mut self, k: K, v: V) -> Option<V> {
        self.inner.as_mut().insert(k, v)
    }

    /// Remove a key if it exists in the tree. If the value exists, we return it as `Some(V)`,
    /// and if it did not exist, we return `None`.
    pub fn remove(&mut self, k: &K) -> Option<V> {
        self.inner.as_mut().remove(k)
    }

    // split_off
    /*
    pub fn split_off_gte(&mut self, key: &K) -> BptreeMap<K, V> {
        unimplemented!();
    }
    */

    /// Remove all values less than (but not including) key from the map.
    pub fn split_off_lt(&mut self, key: &K) {
        self.inner.as_mut().split_off_lt(key)
    }

    // ADVANCED
    // append (join two sets)

    /// Get a mutable reference to a value in the tree. The value is correctly and
    /// safely cloned before you attempt to mutate it, isolating it from
    /// other transactions.
    pub fn get_mut(&mut self, key: &K) -> Option<&mut V> {
        self.inner.as_mut().get_mut_ref(key)
    }

    /// Iterate over a mutable range of values
    pub fn range_mut<R, T>(&mut self, range: R) -> RangeMutIter
    where
        K: Borrow<T>,
        T: Ord + ?Sized,
        R: RangeBounds<T>,
    {
        self.inner.as_mut().range_mut(range)
    }

    // iter_mut
    // entry

    #[cfg(test)]
    pub(crate) fn tree_density(&self) -> (usize, usize) {
        self.inner.as_ref().tree_density()
    }

    #[cfg(test)]
    pub(crate) fn verify(&self) -> bool {
        self.inner.as_ref().verify()
    }

    /// Create a read-snapshot of the current tree.
This does NOT guarantee the tree may /// not be mutated during the read, so you MUST guarantee that no functions of the /// write txn are called while this snapshot is active. pub fn to_snapshot(&self) -> BptreeMapReadSnapshot { BptreeMapReadSnapshot { inner: SnapshotType::W(self.inner.as_ref()), } } } impl BptreeMapReadTxn<'_, K, V> { /// Retrieve a value from the tree. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord + ?Sized, { self.inner.as_ref().search(k) } /// Assert if a key exists in the tree. pub fn contains_key(&self, k: &Q) -> bool where K: Borrow, Q: Ord + ?Sized, { self.inner.as_ref().contains_key(k) } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { self.inner.as_ref().len() } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.inner.as_ref().len() == 0 } #[allow(unused)] pub(crate) fn get_txid(&self) -> u64 { self.inner.as_ref().get_txid() } /// Iterate over a range of values pub fn range(&self, range: R) -> RangeIter where K: Borrow, T: Ord + ?Sized, R: RangeBounds, { self.inner.as_ref().range(range) } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter { self.inner.as_ref().kv_iter() } /// Iterator over &K pub fn values(&self) -> ValueIter { self.inner.as_ref().v_iter() } /// Iterator over &V pub fn keys(&self) -> KeyIter { self.inner.as_ref().k_iter() } /// Retrieve the first (minimum) key-value pair from the map if it exists. pub fn first_key_value(&self) -> Option<(&K, &V)> { self.inner.as_ref().first_key_value() } /// Retrieve the last (maximum) key-value pair from the map if it exists. pub fn last_key_value(&self) -> Option<(&K, &V)> { self.inner.as_ref().last_key_value() } /// Create a read-snapshot of the current tree. /// As this is the read variant, it IS safe, and guaranteed the tree will not change. 
pub fn to_snapshot(&self) -> BptreeMapReadSnapshot { BptreeMapReadSnapshot { inner: SnapshotType::R(self.inner.as_ref()), } } #[cfg(test)] #[allow(dead_code)] pub(crate) fn verify(&self) -> bool { self.inner.as_ref().verify() } } impl BptreeMapReadSnapshot<'_, K, V> { /// Retrieve a value from the tree. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord + ?Sized, { match self.inner { SnapshotType::R(inner) => inner.search(k), SnapshotType::W(inner) => inner.search(k), } } /// Assert if a key exists in the tree. pub fn contains_key(&self, k: &Q) -> bool where K: Borrow, Q: Ord + ?Sized, { match self.inner { SnapshotType::R(inner) => inner.contains_key(k), SnapshotType::W(inner) => inner.contains_key(k), } } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { match self.inner { SnapshotType::R(inner) => inner.len(), SnapshotType::W(inner) => inner.len(), } } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.len() == 0 } /// Iterate over a range of values pub fn range(&self, range: R) -> RangeIter where K: Borrow, T: Ord + ?Sized, R: RangeBounds, { match self.inner { SnapshotType::R(inner) => inner.range(range), SnapshotType::W(inner) => inner.range(range), } } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter { match self.inner { SnapshotType::R(inner) => inner.kv_iter(), SnapshotType::W(inner) => inner.kv_iter(), } } /// Iterator over &K pub fn values(&self) -> ValueIter { match self.inner { SnapshotType::R(inner) => inner.v_iter(), SnapshotType::W(inner) => inner.v_iter(), } } /// Iterator over &V pub fn keys(&self) -> KeyIter { match self.inner { SnapshotType::R(inner) => inner.k_iter(), SnapshotType::W(inner) => inner.k_iter(), } } } concread-0.4.6/src/bptree/mod.rs000064400000000000000000000532551046102023000146200ustar 00000000000000//! 
See the documentation for [BptreeMap](crate::bptree::BptreeMap)

#[cfg(feature = "asynch")]
pub mod asynch;

#[cfg(feature = "serde")]
use serde::{
    de::{Deserialize, Deserializer},
    ser::{Serialize, SerializeMap, Serializer},
};

#[cfg(feature = "serde")]
use crate::utils::MapCollector;

use crate::internals::lincowcell::{LinCowCell, LinCowCellReadTxn, LinCowCellWriteTxn};

include!("impl.rs");

impl<K, V> BptreeMap<K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    /// Initiate a read transaction for the tree, concurrent to any
    /// other readers or writers.
    pub fn read(&self) -> BptreeMapReadTxn<'_, K, V> {
        let inner = self.inner.read();
        BptreeMapReadTxn { inner }
    }

    /// Initiate a write transaction for the tree, exclusive to this
    /// writer, and concurrently to all existing reads.
    pub fn write(&self) -> BptreeMapWriteTxn<'_, K, V> {
        let inner = self.inner.write();
        BptreeMapWriteTxn { inner }
    }
}

impl<K, V> BptreeMapWriteTxn<'_, K, V>
where
    K: Ord + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    /// Commit the changes from this write transaction. Readers after this point
    /// will be able to perceive these changes.
    ///
    /// To abort (unstage changes), just do not call this function.
pub fn commit(self) { self.inner.commit(); } } #[cfg(feature = "serde")] impl Serialize for BptreeMap where K: Serialize + Clone + Ord + Debug + Sync + Send + 'static, V: Serialize + Clone + Sync + Send + 'static, { fn serialize(&self, serializer: S) -> Result where S: Serializer, { let txn = self.read(); let mut state = serializer.serialize_map(Some(txn.len()))?; for (key, val) in txn.iter() { state.serialize_entry(key, val)?; } state.end() } } #[cfg(feature = "serde")] impl<'de, K, V> Deserialize<'de> for BptreeMap where K: Deserialize<'de> + Clone + Ord + Debug + Sync + Send + 'static, V: Deserialize<'de> + Clone + Sync + Send + 'static, { fn deserialize(deserializer: D) -> Result where D: Deserializer<'de>, { deserializer.deserialize_map(MapCollector::new()) } } #[cfg(test)] mod tests { use super::BptreeMap; use crate::internals::bptree::node::{assert_released, L_CAPACITY}; // use rand::prelude::*; use rand::seq::SliceRandom; #[test] fn test_bptree2_map_basic_write() { let bptree: BptreeMap = BptreeMap::new(); { let mut bpwrite = bptree.write(); // We should be able to insert. bpwrite.insert(0, 0); bpwrite.insert(1, 1); assert!(bpwrite.get(&0) == Some(&0)); assert!(bpwrite.get(&1) == Some(&1)); bpwrite.insert(2, 2); bpwrite.commit(); // println!("commit"); } { // Do a clear, but roll it back. let mut bpwrite = bptree.write(); bpwrite.clear(); // DO NOT commit, this triggers the rollback. 
// println!("post clear"); } { let bpwrite = bptree.write(); assert!(bpwrite.get(&0) == Some(&0)); assert!(bpwrite.get(&1) == Some(&1)); // println!("fin write"); } std::mem::drop(bptree); assert_released(); } #[test] fn test_bptree2_map_cursed_get_mut() { let bptree: BptreeMap = BptreeMap::new(); { let mut w = bptree.write(); w.insert(0, 0); w.commit(); } let r1 = bptree.read(); { let mut w = bptree.write(); let cursed_zone = w.get_mut(&0).unwrap(); *cursed_zone = 1; // Correctly fails to work as it's a second borrow, which isn't // possible once w.remove occurs // w.remove(&0); // *cursed_zone = 2; w.commit(); } let r2 = bptree.read(); assert!(r1.get(&0) == Some(&0)); assert!(r2.get(&0) == Some(&1)); /* // Correctly fails to compile. PHEW! let fail = { let mut w = bptree.write(); w.get_mut(&0).unwrap() }; */ std::mem::drop(r1); std::mem::drop(r2); std::mem::drop(bptree); assert_released(); } #[test] fn test_bptree2_map_from_iter_1() { let ins: Vec = (0..(L_CAPACITY << 4)).collect(); let map = BptreeMap::from_iter(ins.into_iter().map(|v| (v, v))); { let w = map.write(); assert!(w.verify()); println!("{:?}", w.tree_density()); } // assert!(w.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4))); std::mem::drop(map); assert_released(); } #[test] fn test_bptree2_map_from_iter_2() { let mut rng = rand::thread_rng(); let mut ins: Vec = (0..(L_CAPACITY << 4)).collect(); ins.shuffle(&mut rng); let map = BptreeMap::from_iter(ins.into_iter().map(|v| (v, v))); { let w = map.write(); assert!(w.verify()); // w.compact_force(); assert!(w.verify()); // assert!(w.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4))); } std::mem::drop(map); assert_released(); } fn bptree_map_basic_concurrency(lower: usize, upper: usize) { // Create a map let map = BptreeMap::new(); // add values { let mut w = map.write(); w.extend((0..lower).map(|v| (v, v))); w.commit(); } // read let r = map.read(); assert!(r.len() == lower); for i in 0..lower { assert!(r.contains_key(&i)) } // Check a 
second write doesn't interfere { let mut w = map.write(); w.extend((lower..upper).map(|v| (v, v))); w.commit(); } assert!(r.len() == lower); // But a new write can see let r2 = map.read(); assert!(r2.len() == upper); for i in 0..upper { assert!(r2.contains_key(&i)) } // Now drain the tree, and the reader should be unaffected. { let mut w = map.write(); for i in 0..upper { assert!(w.remove(&i).is_some()) } w.commit(); } // All consistent! assert!(r.len() == lower); assert!(r2.len() == upper); for i in 0..upper { assert!(r2.contains_key(&i)) } let r3 = map.read(); // println!("{:?}", r3.len()); assert!(r3.is_empty()); std::mem::drop(r); std::mem::drop(r2); std::mem::drop(r3); std::mem::drop(map); assert_released(); } #[test] fn test_bptree2_map_acb_order() { // Need to ensure that txns are dropped in order. // Add data, enouugh to cause a split. All data should be *2 let map = BptreeMap::new(); // add values { let mut w = map.write(); w.extend((0..(L_CAPACITY * 2)).map(|v| (v * 2, v * 2))); w.commit(); } let ro_txn_a = map.read(); // New write, add 1 val { let mut w = map.write(); w.insert(1, 1); w.commit(); } let ro_txn_b = map.read(); // ro_txn_b now owns nodes from a // New write, update a value { let mut w = map.write(); w.insert(1, 10001); w.commit(); } let ro_txn_c = map.read(); // ro_txn_c // Drop ro_txn_b assert!(ro_txn_b.verify()); std::mem::drop(ro_txn_b); // Are both still valid? 
assert!(ro_txn_a.verify()); assert!(ro_txn_c.verify()); // Drop remaining std::mem::drop(ro_txn_a); std::mem::drop(ro_txn_c); std::mem::drop(map); assert_released(); } #[test] fn test_bptree2_map_weird_txn_behaviour() { let map: BptreeMap = BptreeMap::new(); let mut wr = map.write(); let rd = map.read(); wr.insert(1, 1); assert!(rd.get(&1).is_none()); wr.commit(); assert!(rd.get(&1).is_none()); } #[test] #[cfg_attr(miri, ignore)] fn test_bptree2_map_basic_concurrency_small() { bptree_map_basic_concurrency(100, 200) } #[test] #[cfg_attr(miri, ignore)] fn test_bptree2_map_basic_concurrency_large() { bptree_map_basic_concurrency(10_000, 20_000) } #[test] fn test_bptree2_map_rangeiter_1() { let ins: Vec = (0..100).collect(); let map = BptreeMap::from_iter(ins.into_iter().map(|v| (v, v))); { let w = map.write(); assert!(w.range(0..100).count() == 100); assert!(w.range(25..100).count() == 75); assert!(w.range(0..75).count() == 75); assert!(w.range(25..75).count() == 50); } // assert!(w.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4))); std::mem::drop(map); assert_released(); } /* #[test] fn test_bptree2_map_write_compact() { let mut rng = rand::thread_rng(); let insa: Vec = (0..(L_CAPACITY << 4)).collect(); let map = BptreeMap::from_iter(insa.into_iter().map(|v| (v, v))); let mut w = map.write(); // Created linearly, should not need compact assert!(w.compact() == false); assert!(w.verify()); assert!(w.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4))); // Even in reverse, we shouldn't need it ... let insb: Vec = (0..(L_CAPACITY << 4)).collect(); let bmap = BptreeMap::from_iter(insb.into_iter().rev().map(|v| (v, v))); let mut bw = bmap.write(); assert!(bw.compact() == false); assert!(bw.verify()); // Assert the density is "best" assert!(bw.tree_density() == ((L_CAPACITY << 4), (L_CAPACITY << 4))); // Random however, may. 
let mut insc: Vec = (0..(L_CAPACITY << 4)).collect(); insc.shuffle(&mut rng); let cmap = BptreeMap::from_iter(insc.into_iter().map(|v| (v, v))); let mut cw = cmap.write(); let (_n, d1) = cw.tree_density(); cw.compact_force(); assert!(cw.verify()); let (_n, d2) = cw.tree_density(); assert!(d2 <= d1); } */ /* use std::sync::atomic::{AtomicUsize, Ordering}; use crossbeam_utils::thread::scope; use rand::Rng; const MAX_TARGET: usize = 210_000; #[test] fn test_bptree2_map_thread_stress() { let start = time::now(); let reader_completions = AtomicUsize::new(0); // Setup a tree with some initial data. let map: BptreeMap = BptreeMap::from_iter( (0..10_000).map(|v| (v, v)) ); // now setup the threads. scope(|scope| { let mref = ↦ let rref = &reader_completions; let _readers: Vec<_> = (0..7) .map(|_| { scope.spawn(move || { println!("Started reader ..."); let mut rng = rand::thread_rng(); let mut proceed = true; while proceed { let m_read = mref.read(); proceed = ! m_read.contains_key(&MAX_TARGET); // Get a random number. // Add 10_000 * random // Remove 10_000 * random let v1 = rng.gen_range(1, 18) * 10_000; let r1 = v1 + 10_000; for i in v1..r1 { m_read.get(&i); } assert!(m_read.verify()); rref.fetch_add(1, Ordering::Relaxed); } println!("Closing reader ..."); }) }) .collect(); let _writers: Vec<_> = (0..3) .map(|_| { scope.spawn(move || { println!("Started writer ..."); let mut rng = rand::thread_rng(); let mut proceed = true; while proceed { let mut m_write = mref.write(); proceed = ! m_write.contains_key(&MAX_TARGET); // Get a random number. 
// Add 10_000 * random // Remove 10_000 * random let v1 = rng.gen_range(1, 18) * 10_000; let r1 = v1 + 10_000; let v2 = rng.gen_range(1, 19) * 10_000; let r2 = v2 + 10_000; for i in v1..r1 { m_write.insert(i, i); } for i in v2..r2 { m_write.remove(&i); } m_write.commit(); } println!("Closing writer ..."); }) }) .collect(); let _complete = scope.spawn(move || { let mut last_value = 200_000; while last_value < MAX_TARGET { let mut m_write = mref.write(); last_value += 1; if last_value % 1000 == 0 { println!("{:?}", last_value); } m_write.insert(last_value, last_value); assert!(m_write.verify()); m_write.commit(); } }); }); let end = time::now(); print!("BptreeMap MT create :{} reader completions :{}", end - start, reader_completions.load(Ordering::Relaxed)); // Done! } #[test] fn test_std_mutex_btreemap_thread_stress() { use std::collections::BTreeMap; use std::sync::Mutex; let start = time::now(); let reader_completions = AtomicUsize::new(0); // Setup a tree with some initial data. let map: Mutex> = Mutex::new(BTreeMap::from_iter( (0..10_000).map(|v| (v, v)) )); // now setup the threads. scope(|scope| { let mref = ↦ let rref = &reader_completions; let _readers: Vec<_> = (0..7) .map(|_| { scope.spawn(move || { println!("Started reader ..."); let mut rng = rand::thread_rng(); let mut proceed = true; while proceed { let m_read = mref.lock().unwrap(); proceed = ! m_read.contains_key(&MAX_TARGET); // Get a random number. // Add 10_000 * random // Remove 10_000 * random let v1 = rng.gen_range(1, 18) * 10_000; let r1 = v1 + 10_000; for i in v1..r1 { m_read.get(&i); } rref.fetch_add(1, Ordering::Relaxed); } println!("Closing reader ..."); }) }) .collect(); let _writers: Vec<_> = (0..3) .map(|_| { scope.spawn(move || { println!("Started writer ..."); let mut rng = rand::thread_rng(); let mut proceed = true; while proceed { let mut m_write = mref.lock().unwrap(); proceed = ! m_write.contains_key(&MAX_TARGET); // Get a random number. 
// Add 10_000 * random // Remove 10_000 * random let v1 = rng.gen_range(1, 18) * 10_000; let r1 = v1 + 10_000; let v2 = rng.gen_range(1, 19) * 10_000; let r2 = v2 + 10_000; for i in v1..r1 { m_write.insert(i, i); } for i in v2..r2 { m_write.remove(&i); } } println!("Closing writer ..."); }) }) .collect(); let _complete = scope.spawn(move || { let mut last_value = 200_000; while last_value < MAX_TARGET { let mut m_write = mref.lock().unwrap(); last_value += 1; if last_value % 1000 == 0 { println!("{:?}", last_value); } m_write.insert(last_value, last_value); } }); }); let end = time::now(); print!("Mutex MT create :{} reader completions :{}", end - start, reader_completions.load(Ordering::Relaxed)); // Done! } #[test] fn test_std_rwlock_btreemap_thread_stress() { use std::collections::BTreeMap; use std::sync::RwLock; let start = time::now(); let reader_completions = AtomicUsize::new(0); // Setup a tree with some initial data. let map: RwLock> = RwLock::new(BTreeMap::from_iter( (0..10_000).map(|v| (v, v)) )); // now setup the threads. scope(|scope| { let mref = ↦ let rref = &reader_completions; let _readers: Vec<_> = (0..7) .map(|_| { scope.spawn(move || { println!("Started reader ..."); let mut rng = rand::thread_rng(); let mut proceed = true; while proceed { let m_read = mref.read().unwrap(); proceed = ! m_read.contains_key(&MAX_TARGET); // Get a random number. // Add 10_000 * random // Remove 10_000 * random let v1 = rng.gen_range(1, 18) * 10_000; let r1 = v1 + 10_000; for i in v1..r1 { m_read.get(&i); } rref.fetch_add(1, Ordering::Relaxed); } println!("Closing reader ..."); }) }) .collect(); let _writers: Vec<_> = (0..3) .map(|_| { scope.spawn(move || { println!("Started writer ..."); let mut rng = rand::thread_rng(); let mut proceed = true; while proceed { let mut m_write = mref.write().unwrap(); proceed = ! m_write.contains_key(&MAX_TARGET); // Get a random number. 
// Add 10_000 * random // Remove 10_000 * random let v1 = rng.gen_range(1, 18) * 10_000; let r1 = v1 + 10_000; let v2 = rng.gen_range(1, 19) * 10_000; let r2 = v2 + 10_000; for i in v1..r1 { m_write.insert(i, i); } for i in v2..r2 { m_write.remove(&i); } } println!("Closing writer ..."); }) }) .collect(); let _complete = scope.spawn(move || { let mut last_value = 200_000; while last_value < MAX_TARGET { let mut m_write = mref.write().unwrap(); last_value += 1; if last_value % 1000 == 0 { println!("{:?}", last_value); } m_write.insert(last_value, last_value); } }); }); let end = time::now(); print!("RwLock MT create :{} reader completions :{}", end - start, reader_completions.load(Ordering::Relaxed)); // Done! } */ #[cfg(feature = "serde")] #[test] fn test_bptreee2_serialize_deserialize() { let map: BptreeMap = vec![(10, 11), (15, 16), (20, 21)].into_iter().collect(); let value = serde_json::to_value(&map).unwrap(); assert_eq!(value, serde_json::json!({ "10": 11, "15": 16, "20": 21 })); let map: BptreeMap = serde_json::from_value(value).unwrap(); let mut vec: Vec<(usize, usize)> = map.read().iter().map(|(k, v)| (*k, *v)).collect(); vec.sort_unstable(); assert_eq!(vec, [(10, 11), (15, 16), (20, 21)]); } } concread-0.4.6/src/cowcell/asynch.rs000064400000000000000000000225211046102023000154650ustar 00000000000000//! Async CowCell - A concurrently readable cell with Arc //! //! See `CowCell` for more details. use std::ops::{Deref, DerefMut}; use std::sync::Arc; use tokio::sync::{Mutex, MutexGuard}; /// A conncurrently readable async cell. /// /// This structure behaves in a similar manner to a `RwLock`. However unlike /// a `RwLock`, writes and parallel reads can be performed at the same time. This /// means readers and writers do no block either other. Writers are serialised. /// /// To achieve this a form of "copy-on-write" (or for Rust, clone on write) is /// used. 
As a write transaction begins, we clone the existing data to a new
/// location that is capable of being mutated.
///
/// Readers are guaranteed that the content of the `CowCell` will live as long
/// as the read transaction is open, and will be consistent for the duration
/// of the transaction. There can be an "unlimited" number of readers in parallel
/// accessing different generations of data of the `CowCell`.
///
/// Writers are serialised and are guaranteed they have exclusive write access
/// to the structure.
#[derive(Debug)]
pub struct CowCell<T> {
    write: Mutex<()>,
    active: Mutex<Arc<T>>,
}

/// A `CowCell` Write Transaction handle.
///
/// This allows mutation of the content of the `CowCell` without blocking or
/// affecting current readers.
///
/// Changes are only stored in this structure until you call commit. To abort/
/// rollback a change, don't call commit and allow the write transaction to
/// be dropped. This causes the `CowCell` to unlock allowing the next writer
/// to proceed.
pub struct CowCellWriteTxn<'a, T> {
    // Hold open the guard, and initiate the copy to here.
    work: Option<T>,
    read: Arc<T>,
    // This way we know who to contact for updating our data ....
    caller: &'a CowCell<T>,
    _guard: MutexGuard<'a, ()>,
}

/// A `CowCell` Read Transaction handle.
///
/// This allows safe reading of the value within the `CowCell`, that allows
/// no mutation of the value, and without blocking writers.
#[derive(Debug)]
pub struct CowCellReadTxn<T>(Arc<T>);

impl<T> Clone for CowCellReadTxn<T> {
    fn clone(&self) -> Self {
        CowCellReadTxn(self.0.clone())
    }
}

impl<T> CowCell<T>
where
    T: Clone,
{
    /// Create a new `CowCell` for storing type `T`. `T` must implement `Clone`
    /// to enable clone-on-write.
    pub fn new(data: T) -> Self {
        CowCell {
            write: Mutex::new(()),
            active: Mutex::new(Arc::new(data)),
        }
    }

    /// Begin a read transaction, returning a read guard. The content of
    /// the read guard is guaranteed to be consistent for the life time of the
    /// read - even if writers commit during.
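The clone-on-write lifecycle described above can be modelled with std types only. This is an illustrative sketch, not the crate's implementation: the name `ToyCowCell` is invented here, the write transaction guard is collapsed into a closure-taking method, and the separate writer-serialisation lock is omitted.

```rust
use std::sync::{Arc, Mutex};

// Toy CowCell over a single value: read() hands out a stable Arc snapshot;
// write_with() clones the value, mutates the clone, then swaps it in.
struct ToyCowCell<T: Clone> {
    active: Mutex<Arc<T>>,
}

impl<T: Clone> ToyCowCell<T> {
    fn new(data: T) -> Self {
        ToyCowCell { active: Mutex::new(Arc::new(data)) }
    }

    // Readers clone the Arc: their view never changes after this point.
    fn read(&self) -> Arc<T> {
        self.active.lock().unwrap().clone()
    }

    // Clone-on-write: mutate a private copy, then commit it atomically.
    fn write_with(&self, f: impl FnOnce(&mut T)) {
        let mut guard = self.active.lock().unwrap();
        let mut copy = (**guard).clone(); // clone the current generation
        f(&mut copy);
        *guard = Arc::new(copy); // commit: earlier readers keep the old Arc
    }
}

fn main() {
    let cell = ToyCowCell::new(0i64);
    let before = cell.read();       // snapshot of generation 0
    cell.write_with(|v| *v += 1);   // clone, mutate, commit
    assert_eq!(*before, 0);         // the earlier reader keeps its view
    assert_eq!(*cell.read(), 1);    // a later reader sees the commit
}
```

The real `CowCell` additionally delays the clone until the first `get_mut()` and holds a dedicated write lock for the lifetime of the transaction, so writers are serialised while readers proceed unblocked.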
    pub async fn read<'x>(&'x self) -> CowCellReadTxn<T> {
        let rwguard = self.active.lock().await;
        CowCellReadTxn(rwguard.clone())
        // rwguard ends here
    }

    /// Begin a write transaction, returning a write guard. The content of the
    /// write is only visible to this thread, and is not visible to any reader
    /// until `commit()` is called.
    pub async fn write<'x>(&'x self) -> CowCellWriteTxn<'x, T> {
        /* Take the exclusive write lock first */
        let mguard = self.write.lock().await;
        // We delay copying until the first get_mut.
        let read = {
            let rwguard = self.active.lock().await;
            rwguard.clone()
        };
        /* Now build the write struct */
        CowCellWriteTxn {
            work: None,
            read,
            caller: self,
            _guard: mguard,
        }
    }

    /// Attempt to create a write transaction. If another write transaction is
    /// already in progress, `None` is returned. On success, `Some(guard)` is
    /// returned. See also `write(&self)`.
    pub async fn try_write<'x>(&'x self) -> Option<CowCellWriteTxn<'x, T>> {
        /* Take the exclusive write lock first */
        if let Ok(mguard) = self.write.try_lock() {
            // We delay copying until the first get_mut.
            let read: Arc<_> = {
                let rwguard = self.active.lock().await;
                rwguard.clone()
            };
            /* Now build the write struct */
            Some(CowCellWriteTxn {
                work: None,
                read,
                caller: self,
                _guard: mguard,
            })
        } else {
            None
        }
    }

    async fn commit(&self, newdata: Option<T>) {
        if let Some(nd) = newdata {
            let mut rwguard = self.active.lock().await;
            let new_inner = Arc::new(nd);
            // now over-write the last value in the mutex.
            *rwguard = new_inner;
        }
        // If not some, we do nothing.
        // Done
    }
}

impl<T> Deref for CowCellReadTxn<T> {
    type Target = T;

    #[inline]
    fn deref(&self) -> &T {
        &self.0
    }
}

impl<T> CowCellWriteTxn<'_, T>
where
    T: Clone,
{
    /// Access a mutable pointer of the data in the `CowCell`. This data is only
    /// visible to the write transaction object in this thread, until you call
    /// `commit()`.
    #[inline(always)]
    pub fn get_mut(&mut self) -> &mut T {
        if self.work.is_none() {
            let mut data: Option<T> = Some((*self.read).clone());
            std::mem::swap(&mut data, &mut self.work);
            // Should be the none we previously had.
            debug_assert!(data.is_none())
        }
        self.work.as_mut().expect("can not fail")
    }

    /// Commit the changes made in this write transaction to the `CowCell`.
    /// This will consume the transaction so no further changes can be made
    /// after this is called. Not calling this in a block is equivalent to
    /// an abort/rollback of the transaction.
    pub async fn commit(self) {
        /* Write our data back to the CowCell */
        self.caller.commit(self.work).await;
    }
}

impl<T> Deref for CowCellWriteTxn<'_, T>
where
    T: Clone,
{
    type Target = T;

    #[inline(always)]
    fn deref(&self) -> &T {
        match &self.work {
            Some(v) => v,
            None => &self.read,
        }
    }
}

impl<T> DerefMut for CowCellWriteTxn<'_, T>
where
    T: Clone,
{
    #[inline(always)]
    fn deref_mut(&mut self) -> &mut T {
        self.get_mut()
    }
}

#[cfg(test)]
mod tests {
    use super::CowCell;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;

    #[tokio::test]
    async fn test_deref_mut() {
        let data: i64 = 0;
        let cc = CowCell::new(data);
        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write().await;
            *cc_wrtxn = 1;
            cc_wrtxn.commit().await;
        }
        let cc_rotxn = cc.read().await;
        assert_eq!(*cc_rotxn, 1);
    }

    #[tokio::test]
    async fn test_try_write() {
        let data: i64 = 0;
        let cc = CowCell::new(data);
        /* Take a write txn */
        let cc_wrtxn_a = cc.try_write().await;
        assert!(cc_wrtxn_a.is_some());
        /* Because we already hold the write, the second is guaranteed to fail */
        let cc_wrtxn_a = cc.try_write().await;
        assert!(cc_wrtxn_a.is_none());
    }

    #[tokio::test]
    async fn test_simple_create() {
        let data: i64 = 0;
        let cc = CowCell::new(data);

        let cc_rotxn_a = cc.read().await;
        assert_eq!(*cc_rotxn_a, 0);

        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write().await;
            /* Get the data ...
            */
            {
                let mut_ptr = cc_wrtxn.get_mut();
                /* Assert it's 0 */
                assert_eq!(*mut_ptr, 0);
                *mut_ptr = 1;
                assert_eq!(*mut_ptr, 1);
            }
            assert_eq!(*cc_rotxn_a, 0);
            let cc_rotxn_b = cc.read().await;
            assert_eq!(*cc_rotxn_b, 0);
            /* The write txn and its lock are dropped here */
            cc_wrtxn.commit().await;
        }

        /* Start a new txn and see it's still good */
        let cc_rotxn_c = cc.read().await;
        assert_eq!(*cc_rotxn_c, 1);
        assert_eq!(*cc_rotxn_a, 0);
    }

    static GC_COUNT: AtomicUsize = AtomicUsize::new(0);

    #[derive(Debug, Clone)]
    struct TestGcWrapper<T> {
        data: T,
    }

    impl<T> Drop for TestGcWrapper<T> {
        fn drop(&mut self) {
            // Add to the atomic counter ...
            GC_COUNT.fetch_add(1, Ordering::Release);
        }
    }

    async fn test_gc_operation_thread(cc: Arc<CowCell<TestGcWrapper<i64>>>) {
        while GC_COUNT.load(Ordering::Acquire) < 50 {
            // thread::sleep(std::time::Duration::from_millis(200));
            {
                let mut cc_wrtxn = cc.write().await;
                {
                    let mut_ptr = cc_wrtxn.get_mut();
                    mut_ptr.data += 1;
                }
                cc_wrtxn.commit().await;
            }
        }
    }

    #[tokio::test]
    #[cfg_attr(miri, ignore)]
    async fn test_gc_operation() {
        GC_COUNT.store(0, Ordering::Release);
        let data = TestGcWrapper { data: 0 };
        let cc = Arc::new(CowCell::new(data));

        let _ = tokio::join!(
            tokio::task::spawn(test_gc_operation_thread(cc.clone())),
            tokio::task::spawn(test_gc_operation_thread(cc.clone())),
            tokio::task::spawn(test_gc_operation_thread(cc.clone())),
            tokio::task::spawn(test_gc_operation_thread(cc.clone())),
        );

        assert!(GC_COUNT.load(Ordering::Acquire) >= 50);
    }
}
concread-0.4.6/src/cowcell/mod.rs000064400000000000000000000301741046102023000147620ustar 00000000000000//! CowCell - A concurrently readable cell with Arc
//!
//! A CowCell can be used in place of a `RwLock`. Readers are guaranteed that
//! the data will not change during the lifetime of the read. Readers do
//! not block writers, and writers do not block readers. Writers are serialised
//! same as the write in a RwLock.
//!
//! This is the `Arc` collected implementation. `Arc` is slightly slower than `EbrCell`
but has better behaviour with very long running read operations, and more
//! accurate memory reclaim behaviour.

#[cfg(feature = "asynch")]
pub mod asynch;

use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use std::sync::{Mutex, MutexGuard};

/// A concurrently readable cell.
///
/// This structure behaves in a similar manner to a `RwLock`. However unlike
/// a `RwLock`, writes and parallel reads can be performed at the same time. This
/// means readers and writers do not block each other. Writers are serialised.
///
/// To achieve this a form of "copy-on-write" (or for Rust, clone on write) is
/// used. As a write transaction begins, we clone the existing data to a new
/// location that is capable of being mutated.
///
/// Readers are guaranteed that the content of the `CowCell` will live as long
/// as the read transaction is open, and will be consistent for the duration
/// of the transaction. There can be an "unlimited" number of readers in parallel
/// accessing different generations of data of the `CowCell`.
///
/// Writers are serialised and are guaranteed they have exclusive write access
/// to the structure.
///
/// # Examples
/// ```
/// use concread::cowcell::CowCell;
///
/// let data: i64 = 0;
/// let cowcell = CowCell::new(data);
///
/// // Begin a read transaction
/// let read_txn = cowcell.read();
/// assert_eq!(*read_txn, 0);
/// {
///     // Now create a write, and commit it.
///     let mut write_txn = cowcell.write();
///     *write_txn = 1;
///     // Commit the change
///     write_txn.commit();
/// }
/// // Show the previous generation still reads '0'
/// assert_eq!(*read_txn, 0);
/// let new_read_txn = cowcell.read();
/// // And a new read transaction has '1'
/// assert_eq!(*new_read_txn, 1);
/// ```
#[derive(Debug)]
pub struct CowCell<T> {
    write: Mutex<()>,
    active: Mutex<Arc<T>>,
}

/// A `CowCell` Write Transaction handle.
///
/// This allows mutation of the content of the `CowCell` without blocking or
/// affecting current readers.
///
/// Changes are only stored in this structure until you call commit. To abort/
/// rollback a change, don't call commit and allow the write transaction to
/// be dropped. This causes the `CowCell` to unlock allowing the next writer
/// to proceed.
pub struct CowCellWriteTxn<'a, T> {
    // Hold open the guard, and initiate the copy to here.
    work: Option<T>,
    read: Arc<T>,
    // This way we know who to contact for updating our data ....
    caller: &'a CowCell<T>,
    _guard: MutexGuard<'a, ()>,
}

/// A `CowCell` Read Transaction handle.
///
/// This allows safe reading of the value within the `CowCell`, that allows
/// no mutation of the value, and without blocking writers.
#[derive(Debug)]
pub struct CowCellReadTxn<T>(Arc<T>);

impl<T> Clone for CowCellReadTxn<T> {
    fn clone(&self) -> Self {
        CowCellReadTxn(self.0.clone())
    }
}

impl<T> CowCell<T>
where
    T: Clone,
{
    /// Create a new `CowCell` for storing type `T`. `T` must implement `Clone`
    /// to enable clone-on-write.
    pub fn new(data: T) -> Self {
        CowCell {
            write: Mutex::new(()),
            active: Mutex::new(Arc::new(data)),
        }
    }

    /// Begin a read transaction, returning a read guard. The content of
    /// the read guard is guaranteed to be consistent for the life time of the
    /// read - even if writers commit during.
    pub fn read(&self) -> CowCellReadTxn<T> {
        let rwguard = self.active.lock().unwrap();
        CowCellReadTxn(rwguard.clone())
        // rwguard ends here
    }

    /// Begin a write transaction, returning a write guard. The content of the
    /// write is only visible to this thread, and is not visible to any reader
    /// until `commit()` is called.
    pub fn write(&self) -> CowCellWriteTxn<T> {
        /* Take the exclusive write lock first */
        let mguard = self.write.lock().unwrap();
        // We delay copying until the first get_mut.
        let read = {
            let rwguard = self.active.lock().unwrap();
            rwguard.clone()
        };
        /* Now build the write struct */
        CowCellWriteTxn {
            work: None,
            read,
            caller: self,
            _guard: mguard,
        }
    }

    /// Attempt to create a write transaction. If it fails, `None`
    /// is returned. On success, `Some(guard)` is returned. See also
    /// `write(&self)`.
    pub fn try_write(&self) -> Option<CowCellWriteTxn<T>> {
        /* Take the exclusive write lock first */
        self.write.try_lock().ok().map(|mguard| {
            // We delay copying until the first get_mut.
            let read = {
                let rwguard = self.active.lock().unwrap();
                rwguard.clone()
            };
            /* Now build the write struct */
            CowCellWriteTxn {
                work: None,
                read,
                caller: self,
                _guard: mguard,
            }
        })
    }

    fn commit(&self, newdata: Option<T>) {
        if let Some(nd) = newdata {
            let mut rwguard = self.active.lock().unwrap();
            let new_inner = Arc::new(nd);
            // now over-write the last value in the mutex.
            *rwguard = new_inner;
        }
        // If not some, we do nothing.
        // Done
    }
}

impl<T> Deref for CowCellReadTxn<T> {
    type Target = T;

    #[inline]
    fn deref(&self) -> &T {
        &self.0
    }
}

impl<T> CowCellWriteTxn<'_, T>
where
    T: Clone,
{
    /// Access a mutable pointer of the data in the `CowCell`. This data is only
    /// visible to the write transaction object in this thread, until you call
    /// `commit()`.
    #[inline(always)]
    pub fn get_mut(&mut self) -> &mut T {
        if self.work.is_none() {
            let mut data: Option<T> = Some((*self.read).clone());
            std::mem::swap(&mut data, &mut self.work);
            // Should be the none we previously had.
            debug_assert!(data.is_none())
        }
        self.work.as_mut().expect("can not fail")
    }

    /// Commit the changes made in this write transaction to the `CowCell`.
    /// This will consume the transaction so no further changes can be made
    /// after this is called. Not calling this in a block is equivalent to
    /// an abort/rollback of the transaction.
    pub fn commit(self) {
        /* Write our data back to the CowCell */
        self.caller.commit(self.work);
    }
}

impl<T> Deref for CowCellWriteTxn<'_, T>
where
    T: Clone,
{
    type Target = T;

    #[inline(always)]
    fn deref(&self) -> &T {
        match &self.work {
            Some(v) => v,
            None => &self.read,
        }
    }
}

impl<T> DerefMut for CowCellWriteTxn<'_, T>
where
    T: Clone,
{
    #[inline(always)]
    fn deref_mut(&mut self) -> &mut T {
        self.get_mut()
    }
}

#[cfg(test)]
mod tests {
    use super::CowCell;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::thread::scope;
    use std::time;

    #[test]
    fn test_deref_mut() {
        let data: i64 = 0;
        let cc = CowCell::new(data);
        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write();
            *cc_wrtxn = 1;
            cc_wrtxn.commit();
        }
        let cc_rotxn = cc.read();
        assert_eq!(*cc_rotxn, 1);
    }

    #[test]
    fn test_try_write() {
        let data: i64 = 0;
        let cc = CowCell::new(data);
        /* Take a write txn */
        let cc_wrtxn_a = cc.try_write();
        assert!(cc_wrtxn_a.is_some());
        /* Because we already hold the write, the second is guaranteed to fail */
        let cc_wrtxn_a = cc.try_write();
        assert!(cc_wrtxn_a.is_none());
    }

    #[test]
    fn test_simple_create() {
        let data: i64 = 0;
        let cc = CowCell::new(data);

        let cc_rotxn_a = cc.read();
        assert_eq!(*cc_rotxn_a, 0);

        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write();
            /* Get the data ... */
            {
                let mut_ptr = cc_wrtxn.get_mut();
                /* Assert it's 0 */
                assert_eq!(*mut_ptr, 0);
                *mut_ptr = 1;
                assert_eq!(*mut_ptr, 1);
            }
            assert_eq!(*cc_rotxn_a, 0);
            let cc_rotxn_b = cc.read();
            assert_eq!(*cc_rotxn_b, 0);
            /* The write txn and its lock are dropped here */
            cc_wrtxn.commit();
        }

        /* Start a new txn and see it's still good */
        let cc_rotxn_c = cc.read();
        assert_eq!(*cc_rotxn_c, 1);
        assert_eq!(*cc_rotxn_a, 0);
    }

    const MAX_TARGET: i64 = 2000;

    #[test]
    #[cfg_attr(miri, ignore)]
    fn test_multithread_create() {
        let start = time::Instant::now();
        // Create the new cowcell.
        let data: i64 = 0;
        let cc = CowCell::new(data);

        assert!(scope(|scope| {
            let cc_ref = &cc;

            let readers: Vec<_> = (0..7)
                .map(|_| {
                    scope.spawn(move || {
                        let mut last_value: i64 = 0;
                        while last_value < MAX_TARGET {
                            let cc_rotxn = cc_ref.read();
                            {
                                assert!(*cc_rotxn >= last_value);
                                last_value = *cc_rotxn;
                            }
                        }
                    })
                })
                .collect();

            let writers: Vec<_> = (0..3)
                .map(|_| {
                    scope.spawn(move || {
                        let mut last_value: i64 = 0;
                        while last_value < MAX_TARGET {
                            let mut cc_wrtxn = cc_ref.write();
                            {
                                let mut_ptr = cc_wrtxn.get_mut();
                                assert!(*mut_ptr >= last_value);
                                last_value = *mut_ptr;
                                *mut_ptr += 1;
                            }
                            cc_wrtxn.commit();
                        }
                    })
                })
                .collect();

            for h in readers.into_iter() {
                h.join().unwrap();
            }
            for h in writers.into_iter() {
                h.join().unwrap();
            }
            true
        }));

        let end = time::Instant::now();
        print!("Arc MT create :{:?} ", end - start);
    }

    static GC_COUNT: AtomicUsize = AtomicUsize::new(0);

    #[derive(Debug, Clone)]
    struct TestGcWrapper<T> {
        data: T,
    }

    impl<T> Drop for TestGcWrapper<T> {
        fn drop(&mut self) {
            // Add to the atomic counter ...
            GC_COUNT.fetch_add(1, Ordering::Release);
        }
    }

    fn test_gc_operation_thread(cc: &CowCell<TestGcWrapper<i64>>) {
        while GC_COUNT.load(Ordering::Acquire) < 50 {
            // thread::sleep(std::time::Duration::from_millis(200));
            {
                let mut cc_wrtxn = cc.write();
                {
                    let mut_ptr = cc_wrtxn.get_mut();
                    mut_ptr.data += 1;
                }
                cc_wrtxn.commit();
            }
        }
    }

    #[test]
    #[cfg_attr(miri, ignore)]
    fn test_gc_operation() {
        GC_COUNT.store(0, Ordering::Release);
        let data = TestGcWrapper { data: 0 };
        let cc = CowCell::new(data);

        assert!(scope(|scope| {
            let cc_ref = &cc;

            let writers: Vec<_> = (0..3)
                .map(|_| {
                    scope.spawn(move || {
                        test_gc_operation_thread(cc_ref);
                    })
                })
                .collect();

            for h in writers.into_iter() {
                h.join().unwrap();
            }
            true
        }));

        assert!(GC_COUNT.load(Ordering::Acquire) >= 50);
    }
}
concread-0.4.6/src/ebrcell/mod.rs000064400000000000000000000407311046102023000147420ustar 00000000000000//! EbrCell - A concurrently readable cell with Ebr
//!
//! An [EbrCell] can be used in place of a `RwLock`.
//! Readers are guaranteed that
//! the data will not change during the lifetime of the read. Readers do
//! not block writers, and writers do not block readers. Writers are serialised
//! same as the write in a `RwLock`.
//!
//! This is the Ebr collected implementation.
//! Ebr is the crossbeam-epoch based reclaim system for async memory
//! garbage collection. Ebr is faster than `Arc`,
//! but long transactions can cause the memory usage to grow very quickly
//! before a garbage reclaim. This is a space time trade, where you gain
//! performance at the expense of delaying garbage collection. Holding Ebr
//! reads for too long may impact garbage collection of other epoch structures
//! or crossbeam library components.
//! If you need accurate memory reclaim, use the Arc (`CowCell`) implementation.

use crossbeam_epoch as epoch;
use crossbeam_epoch::{Atomic, Guard, Owned};
use std::sync::atomic::Ordering::{Acquire, Relaxed, Release};

use std::mem;
use std::ops::{Deref, DerefMut};
use std::sync::{Mutex, MutexGuard};

/// An `EbrCell` Write Transaction handle.
///
/// This allows mutation of the content of the `EbrCell` without blocking or
/// affecting current readers.
///
/// Changes are only stored in the structure until you call commit: to
/// abort a change, don't call commit and allow the write transaction to
/// go out of scope. This causes the `EbrCell` to unlock allowing other
/// writes to proceed.
pub struct EbrCellWriteTxn<'a, T: 'static + Clone + Send + Sync> {
    data: Option<T>,
    // This way we know who to contact for updating our data ....
    caller: &'a EbrCell<T>,
    _guard: MutexGuard<'a, ()>,
}

impl<T> EbrCellWriteTxn<'_, T>
where
    T: Clone + Sync + Send + 'static,
{
    /// Access a mutable pointer of the data in the `EbrCell`. This data is only
    /// visible to this write transaction object in this thread until you call
    /// 'commit'.
    pub fn get_mut(&mut self) -> &mut T {
        self.data.as_mut().unwrap()
    }

    /// Commit the changes in this write transaction to the `EbrCell`.
    /// This will
    /// consume the transaction so that further changes can not be made to it
    /// after this function is called.
    pub fn commit(mut self) {
        /* Write our data back to the EbrCell */
        // Now make a new dummy element, and swap it into the mutex
        // This fixes up ownership of some values for lifetimes.
        let mut element: Option<T> = None;
        mem::swap(&mut element, &mut self.data);
        self.caller.commit(element);
    }
}

impl<T> Deref for EbrCellWriteTxn<'_, T>
where
    T: Clone + Sync + Send,
{
    type Target = T;

    #[inline]
    fn deref(&self) -> &T {
        self.data.as_ref().unwrap()
    }
}

impl<T> DerefMut for EbrCellWriteTxn<'_, T>
where
    T: Clone + Sync + Send,
{
    fn deref_mut(&mut self) -> &mut T {
        self.data.as_mut().unwrap()
    }
}

/// A concurrently readable cell.
///
/// This structure behaves in a similar manner to a `RwLock<Box<T>>`. However
/// unlike a read-write lock, writes and parallel reads can be performed
/// simultaneously. This means writes do not block reads, and reads do not
/// block writes.
///
/// To achieve this a form of "copy-on-write" (or for Rust, clone on write) is
/// used. As a write transaction begins, we clone the existing data to a new
/// location that is capable of being mutated.
///
/// Readers are guaranteed that the content of the `EbrCell` will live as long
/// as the read transaction is open, and will be consistent for the duration
/// of the transaction. There can be an "unlimited" number of readers in parallel
/// accessing different generations of data of the `EbrCell`.
///
/// Data that is copied is garbage collected using the crossbeam-epoch library.
///
/// Writers are serialised and are guaranteed they have exclusive write access
/// to the structure.
///
/// # Examples
/// ```
/// use concread::ebrcell::EbrCell;
///
/// let data: i64 = 0;
/// let ebrcell = EbrCell::new(data);
///
/// // Begin a read transaction
/// let read_txn = ebrcell.read();
/// assert_eq!(*read_txn, 0);
/// {
///     // Now create a write, and commit it.
///     let mut write_txn = ebrcell.write();
///     *write_txn = 1;
///     // Commit the change
///     write_txn.commit();
/// }
/// // Show the previous generation still reads '0'
/// assert_eq!(*read_txn, 0);
/// let new_read_txn = ebrcell.read();
/// // And a new read transaction has '1'
/// assert_eq!(*new_read_txn, 1);
/// ```
#[derive(Debug)]
pub struct EbrCell<T> {
    write: Mutex<()>,
    active: Atomic<T>,
}

impl<T> EbrCell<T>
where
    T: Clone + Sync + Send + 'static,
{
    /// Create a new `EbrCell` storing type `T`. `T` must implement `Clone`.
    pub fn new(data: T) -> Self {
        EbrCell {
            write: Mutex::new(()),
            active: Atomic::new(data),
        }
    }

    /// Begin a write transaction, returning a write guard.
    pub fn write(&self) -> EbrCellWriteTxn<T> {
        /* Take the exclusive write lock first */
        let mguard = self.write.lock().unwrap();
        /* Do an atomic load of the current value */
        let guard = epoch::pin();
        let cur_shared = self.active.load(Acquire, &guard);
        /* Now build the write struct, we'll discard the pin shortly! */
        EbrCellWriteTxn {
            /* This is the 'copy' of the copy on write! */
            data: Some(unsafe { cur_shared.deref().clone() }),
            caller: self,
            _guard: mguard,
        }
    }

    /// Attempt to begin a write transaction. If it's already held,
    /// `None` is returned.
    pub fn try_write(&self) -> Option<EbrCellWriteTxn<T>> {
        self.write.try_lock().ok().map(|mguard| {
            let guard = epoch::pin();
            let cur_shared = self.active.load(Acquire, &guard);
            /* Now build the write struct, we'll discard the pin shortly! */
            EbrCellWriteTxn {
                /* This is the 'copy' of the copy on write! */
                data: Some(unsafe { cur_shared.deref().clone() }),
                caller: self,
                _guard: mguard,
            }
        })
    }

    /// This is an internal component of the commit cycle. It takes ownership
    /// of the value stored in the writetxn, and commits it to the main EbrCell
    /// safely.
    ///
    /// In theory you could use this as a "lock free" version, but you don't
    /// know if you are trampling a previous change, so it's private and we
    /// let the writetxn struct serialise and protect this interface.
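The commit cycle described above swaps the active pointer in and defers freeing the displaced generation until no pinned reader can still observe it. A std-only toy of that deferral rule — the `MiniEbr` type here is hypothetical and single-structure, not crossbeam-epoch's implementation — makes the ordering concrete:

```rust
use std::sync::Mutex;

// Hypothetical toy of epoch-based deferral: retired values are stamped
// with the epoch at which they were replaced, and only freed once every
// pinned reader entered at or after that epoch.
struct MiniEbr {
    state: Mutex<EbrState>,
}

struct EbrState {
    epoch: u64,
    pinned: Vec<u64>,         // epochs of currently pinned readers
    retired: Vec<(u64, i64)>, // (retire epoch, payload awaiting free)
    freed: Vec<i64>,          // payloads whose memory could be reclaimed
}

impl MiniEbr {
    fn new() -> Self {
        MiniEbr {
            state: Mutex::new(EbrState {
                epoch: 0,
                pinned: Vec::new(),
                retired: Vec::new(),
                freed: Vec::new(),
            }),
        }
    }

    // A reader pins at the current epoch; pass the value back to unpin.
    fn pin(&self) -> u64 {
        let mut s = self.state.lock().unwrap();
        let e = s.epoch;
        s.pinned.push(e);
        e
    }

    fn unpin(&self, e: u64) {
        let mut s = self.state.lock().unwrap();
        if let Some(i) = s.pinned.iter().position(|&p| p == e) {
            s.pinned.swap_remove(i);
        }
    }

    // Retiring a value advances the epoch and queues it for a later free,
    // mirroring defer_destroy on the displaced generation.
    fn retire(&self, v: i64) {
        let mut s = self.state.lock().unwrap();
        s.epoch += 1;
        let e = s.epoch;
        s.retired.push((e, v));
    }

    // Free everything retired at or before the oldest pinned epoch.
    fn collect(&self) {
        let mut s = self.state.lock().unwrap();
        let min_pinned = s.pinned.iter().copied().min().unwrap_or(u64::MAX);
        let retired = std::mem::take(&mut s.retired);
        let (free, keep): (Vec<_>, Vec<_>) =
            retired.into_iter().partition(|&(e, _)| e <= min_pinned);
        s.retired = keep;
        s.freed.extend(free.into_iter().map(|(_, v)| v));
    }

    fn freed_count(&self) -> usize {
        self.state.lock().unwrap().freed.len()
    }
}
```

This is the generational, "drops in blocks" behaviour the linearity test later in this file relies on: a long-pinned reader holds back reclaim of every generation retired after it pinned.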
    fn commit(&self, element: Option<T>) {
        // Yield a read txn?
        let guard = epoch::pin();

        // Load the previous data ready for unlinking
        let prev_data = self.active.load(Acquire, &guard);
        // Make the data Owned, and set it in the active.
        let owned_data: Owned<T> = Owned::new(element.unwrap());
        let _shared_data = self
            .active
            .compare_exchange(prev_data, owned_data, Release, Relaxed, &guard);
        // Finally, set our previous data for cleanup.
        unsafe { guard.defer_destroy(prev_data) };
        // Then return the current data with a readtxn. Do we need a new guard scope?
    }

    /// Begin a read transaction. The returned [`EbrCellReadTxn`] guarantees
    /// the data lives long enough via crossbeam's Epoch type. When this is
    /// dropped the data *may* be freed at some point in the future.
    pub fn read(&self) -> EbrCellReadTxn<T> {
        let guard = epoch::pin();

        // This option returns None on null pointer, but we can never be null
        // as we have to init with data, and all replacement ALWAYS gives us
        // a ptr, so unwrap?
        let cur = {
            let c = self.active.load(Acquire, &guard);
            c.as_raw()
        };

        EbrCellReadTxn {
            _guard: guard,
            data: cur,
        }
    }
}

impl<T> Drop for EbrCell<T>
where
    T: Clone + Sync + Send + 'static,
{
    fn drop(&mut self) {
        // Right, we are dropping! Everything is okay here *except*
        // that we need to tell our active data to be unlinked, else it may
        // be dropped "unsafely".
        let guard = epoch::pin();
        let prev_data = self.active.load(Acquire, &guard);
        unsafe { guard.defer_destroy(prev_data) };
    }
}

/// A read transaction. This stores a reference to the data from the main
/// `EbrCell`, and guarantees it is alive for the duration of the read.
// #[derive(Debug)]
pub struct EbrCellReadTxn<T> {
    _guard: Guard,
    data: *const T,
}

impl<T> Deref for EbrCellReadTxn<T> {
    type Target = T;

    /// Dereference and access the value within the read transaction.
    fn deref(&self) -> &T {
        unsafe { &(*self.data) }
    }
}

#[cfg(test)]
mod tests {
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::thread::scope;
    use std::time;

    use super::EbrCell;

    #[test]
    fn test_deref_mut() {
        let data: i64 = 0;
        let cc = EbrCell::new(data);
        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write();
            *cc_wrtxn = 1;
            cc_wrtxn.commit();
        }
        let cc_rotxn = cc.read();
        assert_eq!(*cc_rotxn, 1);
    }

    #[test]
    fn test_try_write() {
        let data: i64 = 0;
        let cc = EbrCell::new(data);
        /* Take a write txn */
        let cc_wrtxn_a = cc.try_write();
        assert!(cc_wrtxn_a.is_some());
        /* Because we already hold the write, the second is guaranteed to fail */
        let cc_wrtxn_a = cc.try_write();
        assert!(cc_wrtxn_a.is_none());
    }

    #[test]
    fn test_simple_create() {
        let data: i64 = 0;
        let cc = EbrCell::new(data);

        let cc_rotxn_a = cc.read();
        assert_eq!(*cc_rotxn_a, 0);

        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write();
            /* Get the data ... */
            {
                let mut_ptr = cc_wrtxn.get_mut();
                /* Assert it's 0 */
                assert_eq!(*mut_ptr, 0);
                *mut_ptr = 1;
                assert_eq!(*mut_ptr, 1);
            }
            assert_eq!(*cc_rotxn_a, 0);
            let cc_rotxn_b = cc.read();
            assert_eq!(*cc_rotxn_b, 0);
            /* The write txn and its lock are dropped here */
            cc_wrtxn.commit();
        }

        /* Start a new txn and see it's still good */
        let cc_rotxn_c = cc.read();
        assert_eq!(*cc_rotxn_c, 1);
        assert_eq!(*cc_rotxn_a, 0);
    }

    const MAX_TARGET: i64 = 2000;

    #[test]
    #[cfg_attr(miri, ignore)]
    fn test_multithread_create() {
        let start = time::Instant::now();
        // Create the new ebrcell.
        let data: i64 = 0;
        let cc = EbrCell::new(data);

        assert!(scope(|scope| {
            let cc_ref = &cc;

            let readers: Vec<_> = (0..7)
                .map(|_| {
                    scope.spawn(move || {
                        let mut last_value: i64 = 0;
                        while last_value < MAX_TARGET {
                            let cc_rotxn = cc_ref.read();
                            {
                                assert!(*cc_rotxn >= last_value);
                                last_value = *cc_rotxn;
                            }
                        }
                    })
                })
                .collect();

            let writers: Vec<_> = (0..3)
                .map(|_| {
                    scope.spawn(move || {
                        let mut last_value: i64 = 0;
                        while last_value < MAX_TARGET {
                            let mut cc_wrtxn = cc_ref.write();
                            {
                                let mut_ptr = cc_wrtxn.get_mut();
                                assert!(*mut_ptr >= last_value);
                                last_value = *mut_ptr;
                                *mut_ptr += 1;
                            }
                            cc_wrtxn.commit();
                        }
                    })
                })
                .collect();

            for h in readers.into_iter() {
                h.join().unwrap();
            }
            for h in writers.into_iter() {
                h.join().unwrap();
            }
            true
        }));

        let end = time::Instant::now();
        print!("Ebr MT create :{:?} ", end - start);
    }

    static GC_COUNT: AtomicUsize = AtomicUsize::new(0);

    #[derive(Debug, Clone)]
    struct TestGcWrapper<T> {
        data: T,
    }

    impl<T> Drop for TestGcWrapper<T> {
        fn drop(&mut self) {
            // Add to the atomic counter ...
            GC_COUNT.fetch_add(1, Ordering::Release);
        }
    }

    fn test_gc_operation_thread(cc: &EbrCell<TestGcWrapper<i64>>) {
        while GC_COUNT.load(Ordering::Acquire) < 50 {
            // thread::sleep(std::time::Duration::from_millis(200));
            {
                let mut cc_wrtxn = cc.write();
                {
                    let mut_ptr = cc_wrtxn.get_mut();
                    mut_ptr.data += 1;
                }
                cc_wrtxn.commit();
            }
        }
    }

    #[test]
    #[cfg_attr(miri, ignore)]
    fn test_gc_operation() {
        GC_COUNT.store(0, Ordering::Release);
        let data = TestGcWrapper { data: 0 };
        let cc = EbrCell::new(data);

        assert!(scope(|scope| {
            let cc_ref = &cc;

            let writers: Vec<_> = (0..3)
                .map(|_| {
                    scope.spawn(move || {
                        test_gc_operation_thread(cc_ref);
                    })
                })
                .collect();

            for h in writers.into_iter() {
                h.join().unwrap();
            }
            true
        }));

        assert!(GC_COUNT.load(Ordering::Acquire) >= 50);
    }
}

#[cfg(test)]
mod tests_linear {
    use std::sync::atomic::{AtomicUsize, Ordering};

    use super::EbrCell;

    static GC_COUNT: AtomicUsize = AtomicUsize::new(0);

    #[derive(Debug, Clone)]
    struct TestGcWrapper<T> {
        data: T,
    }

    impl<T> Drop for TestGcWrapper<T> {
        fn drop(&mut self) {
            // Add to the atomic counter ...
            GC_COUNT.fetch_add(1, Ordering::Release);
        }
    }

    #[test]
    fn test_gc_operation_linear() {
        /*
         * Test if epoch drops in order (or ordered enough).
         * A property required for b+tree with cow is that txns
         * are dropped in order so that tree states are not invalidated.
         *
         * A -> B -> C
         *
         * If B is dropped, it invalidates nodes copied from A
         * causing the tree to corrupt txn A (and maybe C).
         *
         * EBR, due to its design, won't drop in order, but
         * it drops generationally, in blocks. This is probably
         * good enough. This means that:
         *
         * A -> B -> C .. -> X -> Y
         *
         * EBR will drop in blocks such as:
         *
         * | g1 | g2 | live |
         * A -> B -> C .. -> X -> Y
         *
         * This test is "small" but asserts a basic sanity of drop
         * ordering, but it's not conclusive for b+tree. More testing
         * (likely a multi-thread stress test) is needed, or analysis from
         * other EBR developers.
         */
        GC_COUNT.store(0, Ordering::Release);
        let data = TestGcWrapper { data: 0 };
        let cc = EbrCell::new(data);

        // Open a read A.
        let cc_rotxn_a = cc.read();
        // open a write, change and commit
        {
            let mut cc_wrtxn = cc.write();
            {
                let mut_ptr = cc_wrtxn.get_mut();
                mut_ptr.data += 1;
            }
            cc_wrtxn.commit();
        }
        // open a read B.
        let cc_rotxn_b = cc.read();
        // open a write, change and commit
        {
            let mut cc_wrtxn = cc.write();
            {
                let mut_ptr = cc_wrtxn.get_mut();
                mut_ptr.data += 1;
            }
            cc_wrtxn.commit();
        }
        // open a read C
        let cc_rotxn_c = cc.read();

        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // drop B
        drop(cc_rotxn_b);
        // gc count should be 0.
        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // drop C
        drop(cc_rotxn_c);
        // gc count should be 0
        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // drop A
        drop(cc_rotxn_a);
        // gc count should be 2 (A + B, C is still live)
        assert!(GC_COUNT.load(Ordering::Acquire) <= 2);
    }
}
concread-0.4.6/src/hashmap/asynch.rs000064400000000000000000000145251046102023000154630ustar 00000000000000//! HashMap - An async locked concurrently readable HashMap
//!
//! For more, see [`HashMap`]

#![allow(clippy::implicit_hasher)]

#[cfg(feature = "serde")]
use serde::{
    de::{Deserialize, Deserializer},
    ser::{Serialize, SerializeMap, Serializer},
};

#[cfg(feature = "serde")]
use crate::utils::MapCollector;

use crate::internals::lincowcell_async::{LinCowCell, LinCowCellReadTxn, LinCowCellWriteTxn};

include!("impl.rs");

impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    HashMap<K, V>
{
    /// Construct a new concurrent hashmap
    pub fn new() -> Self {
        // I acknowledge I understand what is required to make this safe.
        HashMap {
            inner: LinCowCell::new(unsafe { SuperBlock::new() }),
        }
    }

    /// Initiate a read transaction for the Hashmap, concurrent to any
    /// other readers or writers.
    pub async fn read<'x>(&'x self) -> HashMapReadTxn<'x, K, V> {
        let inner = self.inner.read().await;
        HashMapReadTxn { inner }
    }

    /// Initiate a write transaction for the map, exclusive to this
    /// writer, and concurrently to all existing reads.
    pub async fn write<'x>(&'x self) -> HashMapWriteTxn<'x, K, V> {
        let inner = self.inner.write().await;
        HashMapWriteTxn { inner }
    }

    /// Attempt to create a new write, returns None if another writer
    /// already exists.
    pub fn try_write(&self) -> Option<HashMapWriteTxn<'_, K, V>> {
        self.inner
            .try_write()
            .map(|inner| HashMapWriteTxn { inner })
    }
}

impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    HashMapWriteTxn<'_, K, V>
{
    /// Commit the changes from this write transaction. Readers after this point
    /// will be able to perceive these changes.
    ///
    /// To abort (unstage changes), just do not call this function.
    pub async fn commit(self) {
        self.inner.commit().await;
    }
}

#[cfg(feature = "serde")]
impl<K, V> Serialize for HashMapReadTxn<'_, K, V>
where
    K: Serialize + Hash + Eq + Clone + Debug + Sync + Send + 'static,
    V: Serialize + Clone + Sync + Send + 'static,
{
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        let mut state = serializer.serialize_map(Some(self.len()))?;
        for (key, val) in self.iter() {
            state.serialize_entry(key, val)?;
        }
        state.end()
    }
}

#[cfg(feature = "serde")]
impl<'de, K, V> Deserialize<'de> for HashMap<K, V>
where
    K: Deserialize<'de> + Hash + Eq + Clone + Debug + Sync + Send + 'static,
    V: Deserialize<'de> + Clone + Sync + Send + 'static,
{
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        deserializer.deserialize_map(MapCollector::new())
    }
}

#[cfg(test)]
mod tests {
    use super::HashMap;

    #[tokio::test]
    async fn test_hashmap_basic_write() {
        let hmap: HashMap<usize, usize> = HashMap::new();
        let mut hmap_write = hmap.write().await;

        hmap_write.insert(10, 10);
        hmap_write.insert(15, 15);

        assert!(hmap_write.contains_key(&10));
        assert!(hmap_write.contains_key(&15));
        assert!(!hmap_write.contains_key(&20));

        assert!(hmap_write.get(&10) == Some(&10));
        {
            let v = hmap_write.get_mut(&10).unwrap();
            *v = 11;
        }
        assert!(hmap_write.get(&10) == Some(&11));

        assert!(hmap_write.remove(&10).is_some());
        assert!(!hmap_write.contains_key(&10));
        assert!(hmap_write.contains_key(&15));

        assert!(hmap_write.remove(&30).is_none());
        hmap_write.clear();
        assert!(!hmap_write.contains_key(&10));
        assert!(!hmap_write.contains_key(&15));
        hmap_write.commit().await;
    }

    #[tokio::test]
    async fn test_hashmap_basic_read_write() {
        let hmap: HashMap<usize, usize> = HashMap::new();
        let mut hmap_w1 = hmap.write().await;
        hmap_w1.insert(10, 10);
        hmap_w1.insert(15, 15);
        hmap_w1.commit().await;

        let hmap_r1 = hmap.read().await;
        assert!(hmap_r1.contains_key(&10));
        assert!(hmap_r1.contains_key(&15));
        assert!(!hmap_r1.contains_key(&20));

        let mut hmap_w2 = hmap.write().await;
        hmap_w2.insert(20, 20);
        hmap_w2.commit().await;

        assert!(hmap_r1.contains_key(&10));
        assert!(hmap_r1.contains_key(&15));
        assert!(!hmap_r1.contains_key(&20));

        let hmap_r2 = hmap.read().await;
        assert!(hmap_r2.contains_key(&10));
        assert!(hmap_r2.contains_key(&15));
        assert!(hmap_r2.contains_key(&20));
    }

    #[tokio::test]
    async fn test_hashmap_basic_read_snapshot() {
        let hmap: HashMap<usize, usize> = HashMap::new();
        let mut hmap_w1 = hmap.write().await;
        hmap_w1.insert(10, 10);
        hmap_w1.insert(15, 15);

        let snap = hmap_w1.to_snapshot();
        assert!(snap.contains_key(&10));
        assert!(snap.contains_key(&15));
        assert!(!snap.contains_key(&20));
    }

    #[tokio::test]
    async fn test_hashmap_basic_iter() {
        let hmap: HashMap<usize, usize> = HashMap::new();
        let mut hmap_w1 = hmap.write().await;
        assert!(hmap_w1.iter().count() == 0);

        hmap_w1.insert(10, 10);
        hmap_w1.insert(15, 15);

        assert!(hmap_w1.iter().count() == 2);
    }

    #[tokio::test]
    async fn test_hashmap_from_iter() {
        let hmap: HashMap<usize, usize> = vec![(10, 10), (15, 15), (20, 20)].into_iter().collect();
        let hmap_r2 = hmap.read().await;
        assert!(hmap_r2.contains_key(&10));
        assert!(hmap_r2.contains_key(&15));
        assert!(hmap_r2.contains_key(&20));
    }

    #[cfg(feature = "serde")]
    #[tokio::test]
    async fn test_hashmap_serialize_deserialize() {
        let hmap: HashMap<usize, usize> = vec![(10, 11), (15, 16), (20, 21)].into_iter().collect();

        let value = serde_json::to_value(&hmap.read().await).unwrap();
        assert_eq!(value, serde_json::json!({ "10": 11, "15": 16, "20": 21 }));

        let hmap: HashMap<usize, usize> =
            serde_json::from_value(value).unwrap();
        let mut vec: Vec<(usize, usize)> =
            hmap.read().await.iter().map(|(k, v)| (*k, *v)).collect();
        vec.sort_unstable();
        assert_eq!(vec, [(10, 11), (15, 16), (20, 21)]);
    }
}
concread-0.4.6/src/hashmap/impl.rs000064400000000000000000000270141046102023000151340ustar 00000000000000use crate::internals::hashmap::cursor::CursorReadOps;
use crate::internals::hashmap::cursor::{CursorRead, CursorWrite, SuperBlock};
use crate::internals::hashmap::iter::*;
use std::borrow::Borrow;

use crate::internals::lincowcell::LinCowCellCapable;

use std::fmt::Debug;
use std::hash::Hash;
use std::iter::FromIterator;

/// A concurrently readable map based on a modified B+Tree structured with fast
/// parallel hashed key lookup.
///
/// This structure can be used in locations where you would otherwise use
/// `RwLock<HashMap>` or `Mutex<HashMap>`.
///
/// This is a concurrently readable structure, meaning it has transactional
/// properties. Writers are serialised (one after the other), and readers
/// can exist in parallel with stable views of the structure at a point
/// in time.
///
/// This is achieved through the use of COW or MVCC. As a write occurs,
/// subsets of the tree are cloned into the writer thread and then committed
/// later. This may cause memory usage to increase in exchange for a gain
/// in concurrent behaviour.
///
/// Transactions can be rolled-back (aborted) without penalty by dropping
/// the `HashMapWriteTxn` without calling `commit()`.
pub struct HashMap<K, V>
where
    K: Hash + Eq + Clone + Debug + Sync + Send + 'static,
    V: Clone + Sync + Send + 'static,
{
    inner: LinCowCell<SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>,
}

unsafe impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    Send for HashMap<K, V>
{
}
unsafe impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    Sync for HashMap<K, V>
{
}

/// An active read transaction over a `HashMap`. The data in this tree
/// is guaranteed to not change and will remain consistent for the life
/// of this transaction.
pub struct HashMapReadTxn<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: LinCowCellReadTxn<'a, SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>, } /// An active write transaction for a `HashMap`. The data in this tree /// may be modified exclusively through this transaction without affecting /// readers. The write may be rolled back/aborted by dropping this guard /// without calling `commit()`. Once `commit()` is called, readers will be /// able to access and perceive changes in new transactions. pub struct HashMapWriteTxn<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: LinCowCellWriteTxn<'a, SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>, } enum SnapshotType<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { R(&'a CursorRead<K, V>), W(&'a CursorWrite<K, V>), } /// A point-in-time snapshot of the tree from within a read OR write. This is /// useful for building other transactional types on top of this structure, as /// you need a way to downcast both HashMapReadTxn and HashMapWriteTxn to /// a singular reader type for a number of get_inner() style patterns. /// /// This snapshot IS safe within the read thread due to the nature of the /// implementation borrowing the inner tree to prevent mutations within the /// same thread while the read snapshot is open.
pub struct HashMapReadSnapshot<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: SnapshotType<'a, K, V>, } impl Default for HashMap { fn default() -> Self { Self::new() } } impl FromIterator<(K, V)> for HashMap { fn from_iter>(iter: I) -> Self { let mut new_sblock = unsafe { SuperBlock::new() }; let prev = new_sblock.create_reader(); let mut cursor = new_sblock.create_writer(); cursor.extend(iter); let _ = new_sblock.pre_commit(cursor, &prev); HashMap { inner: LinCowCell::new(new_sblock), } } } impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > Extend<(K, V)> for HashMapWriteTxn<'_, K, V> { fn extend>(&mut self, iter: I) { self.inner.as_mut().extend(iter); } } impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > HashMapWriteTxn<'_, K, V> { /* pub(crate) fn prehash(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized, { self.inner.as_ref().hash_key(k) } */ pub(crate) fn get_prehashed(&self, k: &Q, k_hash: u64) -> Option<&V> where K: Borrow, Q: Hash + Eq + ?Sized, { self.inner.as_ref().search(k_hash, k) } /// Retrieve a value from the map. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Hash + Eq + ?Sized, { let k_hash = self.inner.as_ref().hash_key(k); self.get_prehashed(k, k_hash) } /// Assert if a key exists in the map. 
pub fn contains_key<Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.get(k).is_some() } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { self.inner.as_ref().len() } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.inner.as_ref().len() == 0 } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter<'_, K, V> { self.inner.as_ref().kv_iter() } /// Iterator over &V pub fn values(&self) -> ValueIter<'_, K, V> { self.inner.as_ref().v_iter() } /// Iterator over &K pub fn keys(&self) -> KeyIter<'_, K, V> { self.inner.as_ref().k_iter() } /// Reset this map to an empty state. As this is within the transaction, this /// change only takes effect once committed. Once cleared, you can begin adding /// new writes and changes that, again, will only be visible once committed. pub fn clear(&mut self) { self.inner.as_mut().clear() } /// Insert or update a value by key. If the value previously existed it is returned /// as `Some(V)`. If the value did not previously exist this returns `None`. pub fn insert(&mut self, k: K, v: V) -> Option<V> { // Hash the key. let k_hash = self.inner.as_ref().hash_key(&k); self.inner.as_mut().insert(k_hash, k, v) } /// Remove a key if it exists in the tree. If the value exists, we return it as `Some(V)`, /// and if it did not exist, we return `None`. pub fn remove(&mut self, k: &K) -> Option<V> { let k_hash = self.inner.as_ref().hash_key(k); self.inner.as_mut().remove(k_hash, k) } /// Get a mutable reference to a value in the tree. The value is correctly, and /// safely, cloned before you attempt to mutate it, isolating it from /// other transactions. pub fn get_mut(&mut self, k: &K) -> Option<&mut V> { let k_hash = self.inner.as_ref().hash_key(k); self.inner.as_mut().get_mut_ref(k_hash, k) } /// Create a read-snapshot of the current map.
This does NOT guarantee the map may /// not be mutated during the read, so you MUST guarantee that no functions of the /// write txn are called while this snapshot is active. pub fn to_snapshot(&self) -> HashMapReadSnapshot<'_, K, V> { HashMapReadSnapshot { inner: SnapshotType::W(self.inner.as_ref()), } } } impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > HashMapReadTxn<'_, K, V> { pub(crate) fn get_prehashed<Q>(&self, k: &Q, k_hash: u64) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.inner.search(k_hash, k) } /// Retrieve a value from the tree. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get<Q>(&self, k: &Q) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { let k_hash = self.inner.as_ref().hash_key(k); self.get_prehashed(k, k_hash) } /// Assert if a key exists in the tree. pub fn contains_key<Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.get(k).is_some() } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { self.inner.as_ref().len() } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.inner.as_ref().len() == 0 } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter<'_, K, V> { self.inner.as_ref().kv_iter() } /// Iterator over &V pub fn values(&self) -> ValueIter<'_, K, V> { self.inner.as_ref().v_iter() } /// Iterator over &K pub fn keys(&self) -> KeyIter<'_, K, V> { self.inner.as_ref().k_iter() } /// Create a read-snapshot of the current tree. /// As this is the read variant, it IS safe, and it is guaranteed the tree will not change. pub fn to_snapshot(&self) -> HashMapReadSnapshot<'_, K, V> { HashMapReadSnapshot { inner: SnapshotType::R(self.inner.as_ref()), } } } impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > HashMapReadSnapshot<'_, K, V> { /// Retrieve a value from the tree.
If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get<Q>(&self, k: &Q) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { match self.inner { SnapshotType::R(inner) => { let k_hash = inner.hash_key(k); inner.search(k_hash, k) } SnapshotType::W(inner) => { let k_hash = inner.hash_key(k); inner.search(k_hash, k) } } } /// Assert if a key exists in the tree. pub fn contains_key<Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.get(k).is_some() } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { match self.inner { SnapshotType::R(inner) => inner.len(), SnapshotType::W(inner) => inner.len(), } } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.len() == 0 } // (adv) range /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter<'_, K, V> { match self.inner { SnapshotType::R(inner) => inner.kv_iter(), SnapshotType::W(inner) => inner.kv_iter(), } } /// Iterator over &V pub fn values(&self) -> ValueIter<'_, K, V> { match self.inner { SnapshotType::R(inner) => inner.v_iter(), SnapshotType::W(inner) => inner.v_iter(), } } /// Iterator over &K pub fn keys(&self) -> KeyIter<'_, K, V> { match self.inner { SnapshotType::R(inner) => inner.k_iter(), SnapshotType::W(inner) => inner.k_iter(), } } } concread-0.4.6/src/hashmap/mod.rs //! HashMap - A concurrently readable HashMap //! //! This is a specialisation of the `BptreeMap`, allowing a concurrently readable //! HashMap. Unlike a traditional hashmap it does *not* have `O(1)` lookup, as it //! internally uses a tree-like structure to store a series of buckets. However, //! if you do not need key-ordering, due to the storage of the hashes as `u64`, //! the operations in the tree to seek the bucket are much faster than the use of //! the same key in the `BptreeMap`. //! //! For more details,
see the [BptreeMap](crate::bptree::BptreeMap) //! //! This structure is very different to the `im` crate. The `im` crate is //! sync + send over individual operations. This means that multiple writes can //! be interleaved atomically and safely, and the readers always see the latest //! data. While this is potentially useful to a set of problems, transactional //! structures are suited to problems where readers have to maintain consistent //! data views for a duration of time, cpu cache friendly behaviours and //! database like transaction properties (ACID). #![allow(clippy::implicit_hasher)] #[cfg(feature = "asynch")] pub mod asynch; #[cfg(feature = "serde")] use serde::{ de::{Deserialize, Deserializer}, ser::{Serialize, SerializeMap, Serializer}, }; #[cfg(feature = "serde")] use crate::utils::MapCollector; use crate::internals::lincowcell::{LinCowCell, LinCowCellReadTxn, LinCowCellWriteTxn}; include!("impl.rs"); impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> HashMap<K, V> { /// Construct a new concurrent hashmap pub fn new() -> Self { // I acknowledge I understand what is required to make this safe. HashMap { inner: LinCowCell::new(unsafe { SuperBlock::new() }), } } /// Initiate a read transaction for the HashMap, concurrent to any /// other readers or writers. pub fn read(&self) -> HashMapReadTxn<'_, K, V> { let inner = self.inner.read(); HashMapReadTxn { inner } } /// Initiate a write transaction for the map, exclusive to this /// writer, and concurrent with all existing reads. pub fn write(&self) -> HashMapWriteTxn<'_, K, V> { let inner = self.inner.write(); HashMapWriteTxn { inner } } /// Attempt to create a new write, returns None if another writer /// already exists.
pub fn try_write(&self) -> Option> { self.inner .try_write() .map(|inner| HashMapWriteTxn { inner }) } } impl HashMapWriteTxn<'_, K, V> { /* pub(crate) fn get_txid(&self) -> u64 { self.inner.as_ref().get_txid() } */ /* pub(crate) fn prehash(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq, { self.inner.as_ref().hash_key(k) } */ /* /// This is *unsafe* because changing the key CAN and WILL break hashing, which can /// have serious consequences. This API only exists to allow arcache to access the inner /// content of the slot to simplify its API. You should basically never touch this /// function as it's the HashMap equivalent of the demon sphere. pub(crate) unsafe fn get_slot_mut(&mut self, k_hash: u64) -> Option<&mut [Datum]> { self.inner.as_mut().get_slot_mut_ref(k_hash) } */ /// Commit the changes from this write transaction. Readers after this point /// will be able to percieve these changes. /// /// To abort (unstage changes), just do not call this function. pub fn commit(self) { self.inner.commit(); } } /* impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > HashMapReadTxn<'_, K, V> { pub(crate) fn get_txid(&self) -> u64 { self.inner.as_ref().get_txid() } pub(crate) fn prehash(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq, { self.inner.as_ref().hash_key(k) } } */ #[cfg(feature = "serde")] impl Serialize for HashMap where K: Serialize + Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Serialize + Clone + Sync + Send + 'static, { fn serialize(&self, serializer: S) -> Result where S: Serializer, { let txn = self.read(); let mut state = serializer.serialize_map(Some(txn.len()))?; for (key, val) in txn.iter() { state.serialize_entry(key, val)?; } state.end() } } #[cfg(feature = "serde")] impl<'de, K, V> Deserialize<'de> for HashMap where K: Deserialize<'de> + Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Deserialize<'de> + Clone + Sync + Send + 'static, { fn deserialize(deserializer: D) -> Result where 
D: Deserializer<'de>, { deserializer.deserialize_map(MapCollector::new()) } } #[cfg(test)] mod tests { use super::HashMap; #[test] fn test_hashmap_basic_write() { let hmap: HashMap = HashMap::new(); let mut hmap_write = hmap.write(); hmap_write.insert(10, 10); hmap_write.insert(15, 15); assert!(hmap_write.contains_key(&10)); assert!(hmap_write.contains_key(&15)); assert!(!hmap_write.contains_key(&20)); assert!(hmap_write.get(&10) == Some(&10)); { let v = hmap_write.get_mut(&10).unwrap(); *v = 11; } assert!(hmap_write.get(&10) == Some(&11)); assert!(hmap_write.remove(&10).is_some()); assert!(!hmap_write.contains_key(&10)); assert!(hmap_write.contains_key(&15)); assert!(hmap_write.remove(&30).is_none()); hmap_write.clear(); assert!(!hmap_write.contains_key(&10)); assert!(!hmap_write.contains_key(&15)); hmap_write.commit(); } #[test] fn test_hashmap_basic_read_write() { let hmap: HashMap = HashMap::new(); let mut hmap_w1 = hmap.write(); hmap_w1.insert(10, 10); hmap_w1.insert(15, 15); hmap_w1.commit(); let hmap_r1 = hmap.read(); assert!(hmap_r1.contains_key(&10)); assert!(hmap_r1.contains_key(&15)); assert!(!hmap_r1.contains_key(&20)); let mut hmap_w2 = hmap.write(); hmap_w2.insert(20, 20); hmap_w2.commit(); assert!(hmap_r1.contains_key(&10)); assert!(hmap_r1.contains_key(&15)); assert!(!hmap_r1.contains_key(&20)); let hmap_r2 = hmap.read(); assert!(hmap_r2.contains_key(&10)); assert!(hmap_r2.contains_key(&15)); assert!(hmap_r2.contains_key(&20)); } #[test] fn test_hashmap_basic_read_snapshot() { let hmap: HashMap = HashMap::new(); let mut hmap_w1 = hmap.write(); hmap_w1.insert(10, 10); hmap_w1.insert(15, 15); let snap = hmap_w1.to_snapshot(); assert!(snap.contains_key(&10)); assert!(snap.contains_key(&15)); assert!(!snap.contains_key(&20)); } #[test] fn test_hashmap_basic_iter() { let hmap: HashMap = HashMap::new(); let mut hmap_w1 = hmap.write(); assert!(hmap_w1.iter().count() == 0); hmap_w1.insert(10, 10); hmap_w1.insert(15, 15); assert!(hmap_w1.iter().count() == 
2); } #[test] fn test_hashmap_from_iter() { let hmap: HashMap = vec![(10, 10), (15, 15), (20, 20)].into_iter().collect(); let hmap_r2 = hmap.read(); assert!(hmap_r2.contains_key(&10)); assert!(hmap_r2.contains_key(&15)); assert!(hmap_r2.contains_key(&20)); } #[cfg(feature = "serde")] #[test] fn test_hashmap_serialize_deserialize() { let hmap: HashMap = vec![(10, 11), (15, 16), (20, 21)].into_iter().collect(); let value = serde_json::to_value(&hmap).unwrap(); assert_eq!(value, serde_json::json!({ "10": 11, "15": 16, "20": 21 })); let hmap: HashMap = serde_json::from_value(value).unwrap(); let mut vec: Vec<(usize, usize)> = hmap.read().iter().map(|(k, v)| (*k, *v)).collect(); vec.sort_unstable(); assert_eq!(vec, [(10, 11), (15, 16), (20, 21)]); } } concread-0.4.6/src/hashtrie/asynch.rs000064400000000000000000000145661046102023000156560ustar 00000000000000//! HashTrie - A async locked concurrently readable HashTrie //! //! For more, see [`HashTrie`] #![allow(clippy::implicit_hasher)] #[cfg(feature = "serde")] use serde::{ de::{Deserialize, Deserializer}, ser::{Serialize, SerializeMap, Serializer}, }; #[cfg(feature = "serde")] use crate::utils::MapCollector; use crate::internals::lincowcell_async::{LinCowCell, LinCowCellReadTxn, LinCowCellWriteTxn}; include!("impl.rs"); impl HashTrie { /// Construct a new concurrent hashtrie pub fn new() -> Self { // I acknowledge I understand what is required to make this safe. HashTrie { inner: LinCowCell::new(unsafe { SuperBlock::new() }), } } /// Initiate a read transaction for the Hashmap, concurrent to any /// other readers or writers. pub async fn read<'x>(&'x self) -> HashTrieReadTxn<'x, K, V> { let inner = self.inner.read().await; HashTrieReadTxn { inner } } /// Initiate a write transaction for the map, exclusive to this /// writer, and concurrently to all existing reads. 
pub async fn write<'x>(&'x self) -> HashTrieWriteTxn<'x, K, V> { let inner = self.inner.write().await; HashTrieWriteTxn { inner } } /// Attempt to create a new write, returns None if another writer /// already exists. pub fn try_write(&self) -> Option> { self.inner .try_write() .map(|inner| HashTrieWriteTxn { inner }) } } impl HashTrieWriteTxn<'_, K, V> { /// Commit the changes from this write transaction. Readers after this point /// will be able to percieve these changes. /// /// To abort (unstage changes), just do not call this function. pub async fn commit(self) { self.inner.commit().await; } } #[cfg(feature = "serde")] impl Serialize for HashTrieReadTxn<'_, K, V> where K: Serialize + Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Serialize + Clone + Sync + Send + 'static, { fn serialize(&self, serializer: S) -> Result where S: Serializer, { let mut state = serializer.serialize_map(Some(self.len()))?; for (key, val) in self.iter() { state.serialize_entry(key, val)?; } state.end() } } #[cfg(feature = "serde")] impl<'de, K, V> Deserialize<'de> for HashTrie where K: Deserialize<'de> + Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Deserialize<'de> + Clone + Sync + Send + 'static, { fn deserialize(deserializer: D) -> Result where D: Deserializer<'de>, { deserializer.deserialize_map(MapCollector::new()) } } #[cfg(test)] mod tests { use super::HashTrie; #[tokio::test] async fn test_hashtrie_basic_write() { let hmap: HashTrie = HashTrie::new(); let mut hmap_write = hmap.write().await; hmap_write.insert(10, 10); hmap_write.insert(15, 15); assert!(hmap_write.contains_key(&10)); assert!(hmap_write.contains_key(&15)); assert!(!hmap_write.contains_key(&20)); assert!(hmap_write.get(&10) == Some(&10)); { let v = hmap_write.get_mut(&10).unwrap(); *v = 11; } assert!(hmap_write.get(&10) == Some(&11)); assert!(hmap_write.remove(&10).is_some()); assert!(!hmap_write.contains_key(&10)); assert!(hmap_write.contains_key(&15)); assert!(hmap_write.remove(&30).is_none()); 
hmap_write.clear(); assert!(!hmap_write.contains_key(&10)); assert!(!hmap_write.contains_key(&15)); hmap_write.commit().await; } #[tokio::test] async fn test_hashtrie_basic_read_write() { let hmap: HashTrie = HashTrie::new(); let mut hmap_w1 = hmap.write().await; hmap_w1.insert(10, 10); hmap_w1.insert(15, 15); hmap_w1.commit().await; let hmap_r1 = hmap.read().await; assert!(hmap_r1.contains_key(&10)); assert!(hmap_r1.contains_key(&15)); assert!(!hmap_r1.contains_key(&20)); let mut hmap_w2 = hmap.write().await; hmap_w2.insert(20, 20); hmap_w2.commit().await; assert!(hmap_r1.contains_key(&10)); assert!(hmap_r1.contains_key(&15)); assert!(!hmap_r1.contains_key(&20)); let hmap_r2 = hmap.read().await; assert!(hmap_r2.contains_key(&10)); assert!(hmap_r2.contains_key(&15)); assert!(hmap_r2.contains_key(&20)); } #[tokio::test] async fn test_hashtrie_basic_read_snapshot() { let hmap: HashTrie = HashTrie::new(); let mut hmap_w1 = hmap.write().await; hmap_w1.insert(10, 10); hmap_w1.insert(15, 15); let snap = hmap_w1.to_snapshot(); assert!(snap.contains_key(&10)); assert!(snap.contains_key(&15)); assert!(!snap.contains_key(&20)); } #[tokio::test] async fn test_hashtrie_basic_iter() { let hmap: HashTrie = HashTrie::new(); let mut hmap_w1 = hmap.write().await; assert!(hmap_w1.iter().count() == 0); hmap_w1.insert(10, 10); hmap_w1.insert(15, 15); assert!(hmap_w1.iter().count() == 2); } #[tokio::test] async fn test_hashtrie_from_iter() { let hmap: HashTrie = vec![(10, 10), (15, 15), (20, 20)].into_iter().collect(); let hmap_r2 = hmap.read().await; assert!(hmap_r2.contains_key(&10)); assert!(hmap_r2.contains_key(&15)); assert!(hmap_r2.contains_key(&20)); } #[cfg(feature = "serde")] #[tokio::test] async fn test_hashtrie_serialize_deserialize() { let hmap: HashTrie = vec![(10, 11), (15, 16), (20, 21)].into_iter().collect(); let value = serde_json::to_value(&hmap.read().await).unwrap(); assert_eq!(value, serde_json::json!({ "10": 11, "15": 16, "20": 21 })); let hmap: HashTrie = 
serde_json::from_value(value).unwrap(); let mut vec: Vec<(usize, usize)> = hmap.read().await.iter().map(|(k, v)| (*k, *v)).collect(); vec.sort_unstable(); assert_eq!(vec, [(10, 11), (15, 16), (20, 21)]); } } concread-0.4.6/src/hashtrie/impl.rs000064400000000000000000000265271046102023000153320ustar 00000000000000use crate::internals::hashtrie::cursor::CursorReadOps; use crate::internals::hashtrie::cursor::{CursorRead, CursorWrite, SuperBlock}; use crate::internals::hashtrie::iter::*; use std::borrow::Borrow; use crate::internals::lincowcell::LinCowCellCapable; use std::fmt::Debug; use std::hash::Hash; use std::iter::FromIterator; /// A concurrently readable map based on a modified Trie. /// /// /// /// This is a concurrently readable structure, meaning it has transactional /// properties. Writers are serialised (one after the other), and readers /// can exist in parallel with stable views of the structure at a point /// in time. /// /// This is achieved through the use of COW or MVCC. As a write occurs /// subsets of the tree are cloned into the writer thread and then commited /// later. This may cause memory usage to increase in exchange for a gain /// in concurrent behaviour. /// /// Transactions can be rolled-back (aborted) without penalty by dropping /// the `HashTrieWriteTxn` without calling `commit()`. pub struct HashTrie where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: LinCowCell, CursorRead, CursorWrite>, } unsafe impl Send for HashTrie { } unsafe impl Sync for HashTrie { } /// An active read transaction over a `HashTrie`. The data in this tree /// is guaranteed to not change and will remain consistent for the life /// of this transaction. pub struct HashTrieReadTxn<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: LinCowCellReadTxn<'a, SuperBlock, CursorRead, CursorWrite>, } /// An active write transaction for a `HashTrie`. 
The data in this tree /// may be modified exclusively through this transaction without affecting /// readers. The write may be rolled back/aborted by dropping this guard /// without calling `commit()`. Once `commit()` is called, readers will be /// able to access and perceive changes in new transactions. pub struct HashTrieWriteTxn<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: LinCowCellWriteTxn<'a, SuperBlock<K, V>, CursorRead<K, V>, CursorWrite<K, V>>, } enum SnapshotType<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { R(&'a CursorRead<K, V>), W(&'a CursorWrite<K, V>), } /// A point-in-time snapshot of the tree from within a read OR write. This is /// useful for building other transactional types on top of this structure, as /// you need a way to downcast both HashTrieReadTxn and HashTrieWriteTxn to /// a singular reader type for a number of get_inner() style patterns. /// /// This snapshot IS safe within the read thread due to the nature of the /// implementation borrowing the inner tree to prevent mutations within the /// same thread while the read snapshot is open.
pub struct HashTrieReadSnapshot<'a, K, V> where K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, { inner: SnapshotType<'a, K, V>, } impl Default for HashTrie { fn default() -> Self { Self::new() } } impl FromIterator<(K, V)> for HashTrie { fn from_iter>(iter: I) -> Self { let mut new_sblock = unsafe { SuperBlock::new() }; let prev = new_sblock.create_reader(); let mut cursor = new_sblock.create_writer(); cursor.extend(iter); let _ = new_sblock.pre_commit(cursor, &prev); HashTrie { inner: LinCowCell::new(new_sblock), } } } impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > Extend<(K, V)> for HashTrieWriteTxn<'_, K, V> { fn extend>(&mut self, iter: I) { self.inner.as_mut().extend(iter); } } impl< K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static, > HashTrieWriteTxn<'_, K, V> { /* pub(crate) fn prehash(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized, { self.inner.as_ref().hash_key(k) } */ pub(crate) fn get_prehashed(&self, k: &Q, k_hash: u64) -> Option<&V> where K: Borrow, Q: Hash + Eq + ?Sized, { self.inner.as_ref().search(k_hash, k) } /// Retrieve a value from the map. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Hash + Eq + ?Sized, { let k_hash = self.inner.as_ref().hash_key(k); self.get_prehashed(k, k_hash) } /// Assert if a key exists in the map. 
pub fn contains_key<Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.get(k).is_some() } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { self.inner.as_ref().len() } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.inner.as_ref().len() == 0 } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter<'_, K, V> { self.inner.as_ref().kv_iter() } /// Iterator over &V pub fn values(&self) -> ValueIter<'_, K, V> { self.inner.as_ref().v_iter() } /// Iterator over &K pub fn keys(&self) -> KeyIter<'_, K, V> { self.inner.as_ref().k_iter() } /// Reset this map to an empty state. As this is within the transaction, this /// change only takes effect once committed. Once cleared, you can begin adding /// new writes and changes that, again, will only be visible once committed. pub fn clear(&mut self) { self.inner.as_mut().clear() } /// Insert or update a value by key. If the value previously existed it is returned /// as `Some(V)`. If the value did not previously exist this returns `None`. pub fn insert(&mut self, k: K, v: V) -> Option<V> { // Hash the key. let k_hash = self.inner.as_ref().hash_key(&k); self.inner.as_mut().insert(k_hash, k, v) } /// Remove a key if it exists in the tree. If the value exists, we return it as `Some(V)`, /// and if it did not exist, we return `None`. pub fn remove(&mut self, k: &K) -> Option<V> { let k_hash = self.inner.as_ref().hash_key(k); self.inner.as_mut().remove(k_hash, k) } /// Get a mutable reference to a value in the tree. The value is correctly, and /// safely, cloned before you attempt to mutate it, isolating it from /// other transactions. pub fn get_mut(&mut self, k: &K) -> Option<&mut V> { let k_hash = self.inner.as_ref().hash_key(k); self.inner.as_mut().get_mut_ref(k_hash, k) } /// Create a read-snapshot of the current map.
This does NOT guarantee the map may /// not be mutated during the read, so you MUST guarantee that no functions of the /// write txn are called while this snapshot is active. pub fn to_snapshot(&self) -> HashTrieReadSnapshot<'_, K, V> { HashTrieReadSnapshot { inner: SnapshotType::W(self.inner.as_ref()), } } } impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> HashTrieReadTxn<'_, K, V> { pub(crate) fn get_prehashed<Q>(&self, k: &Q, k_hash: u64) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.inner.search(k_hash, k) } /// Retrieve a value from the tree. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned. pub fn get<Q>(&self, k: &Q) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { let k_hash = self.inner.as_ref().hash_key(k); self.get_prehashed(k, k_hash) } /// Assert if a key exists in the tree. pub fn contains_key<Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.get(k).is_some() } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { self.inner.as_ref().len() } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.inner.as_ref().len() == 0 } /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter<'_, K, V> { self.inner.as_ref().kv_iter() } /// Iterator over &V pub fn values(&self) -> ValueIter<'_, K, V> { self.inner.as_ref().v_iter() } /// Iterator over &K pub fn keys(&self) -> KeyIter<'_, K, V> { self.inner.as_ref().k_iter() } /// Create a read-snapshot of the current tree. /// As this is the read variant, it IS safe, and it is guaranteed the tree will not change. pub fn to_snapshot(&self) -> HashTrieReadSnapshot<'_, K, V> { HashTrieReadSnapshot { inner: SnapshotType::R(self.inner.as_ref()), } } } impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> HashTrieReadSnapshot<'_, K, V> { /// Retrieve a value from the tree. If the value exists, a reference is returned /// as `Some(&V)`, otherwise if not present `None` is returned.
pub fn get<Q>(&self, k: &Q) -> Option<&V> where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { match self.inner { SnapshotType::R(inner) => { let k_hash = inner.hash_key(k); inner.search(k_hash, k) } SnapshotType::W(inner) => { let k_hash = inner.hash_key(k); inner.search(k_hash, k) } } } /// Assert if a key exists in the tree. pub fn contains_key<Q>(&self, k: &Q) -> bool where K: Borrow<Q>, Q: Hash + Eq + ?Sized, { self.get(k).is_some() } /// Returns the current number of k:v pairs in the tree pub fn len(&self) -> usize { match self.inner { SnapshotType::R(inner) => inner.len(), SnapshotType::W(inner) => inner.len(), } } /// Determine if the set is currently empty pub fn is_empty(&self) -> bool { self.len() == 0 } // (adv) range /// Iterator over `(&K, &V)` of the set pub fn iter(&self) -> Iter<'_, K, V> { match self.inner { SnapshotType::R(inner) => inner.kv_iter(), SnapshotType::W(inner) => inner.kv_iter(), } } /// Iterator over &V pub fn values(&self) -> ValueIter<'_, K, V> { match self.inner { SnapshotType::R(inner) => inner.v_iter(), SnapshotType::W(inner) => inner.v_iter(), } } /// Iterator over &K pub fn keys(&self) -> KeyIter<'_, K, V> { match self.inner { SnapshotType::R(inner) => inner.k_iter(), SnapshotType::W(inner) => inner.k_iter(), } } } concread-0.4.6/src/hashtrie/mod.rs //! HashTrie - A concurrently readable HashTrie //! //! A HashTrie is similar to the Tree based `HashMap`, however instead of //! storing hashes in a tree arrangement, we use Trie behaviours to slice a hash //! into array indexes for accessing the elements. This reduces memory consumed //! as we do not need to store hashes of values in branches, but it can cause //! memory to increase as we don't have node-split behaviour like a BTree //! that backs the HashMap. //! //! Generally, this structure is faster than the HashMap, but at the expense //! that it may consume more memory for its internal storage. However, even at
large sizes such as ~16,000,000 entries, this will only consume ~16KB for //! branches. The majority of your space will be taken by your keys and values. //! //! If in doubt, use `HashMap` 😁 //! //! This structure is very different to the `im` crate. The `im` crate is //! sync + send over individual operations. This means that multiple writes can //! be interleaved atomically and safely, and the readers always see the latest //! data. While this is potentially useful to a set of problems, transactional //! structures are suited to problems where readers have to maintain consistent //! data views for a duration of time, cpu cache friendly behaviours and //! database like transaction properties (ACID). #![allow(clippy::implicit_hasher)] #[cfg(feature = "asynch")] pub mod asynch; #[cfg(feature = "serde")] use serde::{ de::{Deserialize, Deserializer}, ser::{Serialize, SerializeMap, Serializer}, }; #[cfg(feature = "arcache")] use crate::internals::hashtrie::cursor::Datum; #[cfg(feature = "serde")] use crate::utils::MapCollector; use crate::internals::lincowcell::{LinCowCell, LinCowCellReadTxn, LinCowCellWriteTxn}; include!("impl.rs"); impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static> HashTrie<K, V> { /// Construct a new concurrent hashtrie pub fn new() -> Self { // I acknowledge I understand what is required to make this safe. HashTrie { inner: LinCowCell::new(unsafe { SuperBlock::new() }), } } /// Initiate a read transaction for the HashTrie, concurrent to any /// other readers or writers. pub fn read(&self) -> HashTrieReadTxn<'_, K, V> { let inner = self.inner.read(); HashTrieReadTxn { inner } } /// Initiate a write transaction for the map, exclusive to this /// writer, and concurrent with all existing reads. pub fn write(&self) -> HashTrieWriteTxn<'_, K, V> { let inner = self.inner.write(); HashTrieWriteTxn { inner } } /// Attempt to create a new write, returns None if another writer /// already exists.
    pub fn try_write(&self) -> Option<HashTrieWriteTxn<'_, K, V>> {
        self.inner
            .try_write()
            .map(|inner| HashTrieWriteTxn { inner })
    }
}

impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    HashTrieWriteTxn<'_, K, V>
{
    #[cfg(feature = "arcache")]
    pub(crate) fn get_txid(&self) -> u64 {
        self.inner.as_ref().get_txid()
    }

    #[cfg(feature = "arcache")]
    pub(crate) fn prehash<Q>(&self, k: &Q) -> u64
    where
        K: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.inner.as_ref().hash_key(k)
    }

    /// This is *unsafe* because changing the key CAN and WILL break hashing, which can
    /// have serious consequences. This API only exists to allow arcache to access the inner
    /// content of the slot to simplify its API. You should basically never touch this
    /// function as it's the HashTrie equivalent of the demon sphere.
    #[cfg(feature = "arcache")]
    pub(crate) unsafe fn get_slot_mut(&mut self, k_hash: u64) -> Option<&mut [Datum<K, V>]> {
        self.inner.as_mut().get_slot_mut_ref(k_hash)
    }

    /// Commit the changes from this write transaction. Readers after this point
    /// will be able to perceive these changes.
    ///
    /// To abort (unstage changes), just do not call this function.
    pub fn commit(self) {
        self.inner.commit();
    }
}

impl<K: Hash + Eq + Clone + Debug + Sync + Send + 'static, V: Clone + Sync + Send + 'static>
    HashTrieReadTxn<'_, K, V>
{
    #[cfg(feature = "arcache")]
    pub(crate) fn get_txid(&self) -> u64 {
        self.inner.as_ref().get_txid()
    }

    #[cfg(feature = "arcache")]
    pub(crate) fn prehash<Q>(&self, k: &Q) -> u64
    where
        K: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.inner.as_ref().hash_key(k)
    }
}

#[cfg(feature = "serde")]
impl<K, V> Serialize for HashTrie<K, V>
where
    K: Serialize + Hash + Eq + Clone + Debug + Sync + Send + 'static,
    V: Serialize + Clone + Sync + Send + 'static,
{
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        let txn = self.read();
        let mut state = serializer.serialize_map(Some(txn.len()))?;
        for (key, val) in txn.iter() {
            state.serialize_entry(key, val)?;
        }
        state.end()
    }
}

#[cfg(feature = "serde")]
impl<'de, K, V> Deserialize<'de> for HashTrie<K, V>
where
    K: Deserialize<'de> + Hash + Eq + Clone + Debug + Sync + Send + 'static,
    V: Deserialize<'de> + Clone + Sync + Send + 'static,
{
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        deserializer.deserialize_map(MapCollector::new())
    }
}

#[cfg(test)]
mod tests {
    use super::HashTrie;

    #[test]
    fn test_hashtrie_basic_write() {
        let hmap: HashTrie<usize, usize> = HashTrie::new();
        let mut hmap_write = hmap.write();

        hmap_write.insert(10, 10);
        hmap_write.insert(15, 15);

        assert!(hmap_write.contains_key(&10));
        assert!(hmap_write.contains_key(&15));
        assert!(!hmap_write.contains_key(&20));

        assert!(hmap_write.get(&10) == Some(&10));
        {
            let v = hmap_write.get_mut(&10).unwrap();
            *v = 11;
        }
        assert!(hmap_write.get(&10) == Some(&11));

        assert!(hmap_write.remove(&10).is_some());
        assert!(!hmap_write.contains_key(&10));
        assert!(hmap_write.contains_key(&15));

        assert!(hmap_write.remove(&30).is_none());

        hmap_write.clear();
        assert!(!hmap_write.contains_key(&10));
        assert!(!hmap_write.contains_key(&15));
        hmap_write.commit();
    }

    #[test]
    fn test_hashtrie_basic_read_write() {
        let hmap: HashTrie<usize, usize> = HashTrie::new();
        let mut hmap_w1 = hmap.write();
        hmap_w1.insert(10, 10);
        hmap_w1.insert(15, 15);
        hmap_w1.commit();

        let hmap_r1 = hmap.read();
        assert!(hmap_r1.contains_key(&10));
        assert!(hmap_r1.contains_key(&15));
        assert!(!hmap_r1.contains_key(&20));

        let mut hmap_w2 = hmap.write();
        hmap_w2.insert(20, 20);
        hmap_w2.commit();

        assert!(hmap_r1.contains_key(&10));
        assert!(hmap_r1.contains_key(&15));
        assert!(!hmap_r1.contains_key(&20));

        let hmap_r2 = hmap.read();
        assert!(hmap_r2.contains_key(&10));
        assert!(hmap_r2.contains_key(&15));
        assert!(hmap_r2.contains_key(&20));
    }

    #[test]
    fn test_hashtrie_basic_read_snapshot() {
        let hmap: HashTrie<usize, usize> = HashTrie::new();
        let mut hmap_w1 = hmap.write();
        hmap_w1.insert(10, 10);
        hmap_w1.insert(15, 15);

        let snap = hmap_w1.to_snapshot();
        assert!(snap.contains_key(&10));
        assert!(snap.contains_key(&15));
        assert!(!snap.contains_key(&20));
    }

    #[test]
    fn test_hashtrie_basic_iter() {
        let hmap: HashTrie<usize, usize> = HashTrie::new();
        let mut hmap_w1 = hmap.write();
        assert!(hmap_w1.iter().count() == 0);

        hmap_w1.insert(10, 10);
        hmap_w1.insert(15, 15);

        assert!(hmap_w1.iter().count() == 2);
    }

    #[test]
    fn test_hashtrie_from_iter() {
        let hmap: HashTrie<usize, usize> = vec![(10, 10), (15, 15), (20, 20)].into_iter().collect();
        let hmap_r2 = hmap.read();
        assert!(hmap_r2.contains_key(&10));
        assert!(hmap_r2.contains_key(&15));
        assert!(hmap_r2.contains_key(&20));
    }

    #[test]
    fn test_hashtrie_double_free() {
        let hmap: HashTrie<usize, usize> = HashTrie::new();
        let mut tx = hmap.write();
        for _i in 0..2 {
            tx.insert(13, 34);
            tx.remove(&13);
        }
    }

    #[cfg(feature = "serde")]
    #[test]
    fn test_hashtrie_serialize_deserialize() {
        let hmap: HashTrie<usize, usize> = vec![(10, 11), (15, 16), (20, 21)].into_iter().collect();

        let value = serde_json::to_value(&hmap).unwrap();
        assert_eq!(value, serde_json::json!({ "10": 11, "15": 16, "20": 21 }));

        let hmap: HashTrie<usize, usize> = serde_json::from_value(value).unwrap();
        let mut vec: Vec<(usize, usize)> = hmap.read().iter().map(|(k, v)| (*k, *v)).collect();
        vec.sort_unstable();
        assert_eq!(vec, [(10, 11), (15, 16), (20, 21)]);
    }
}
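The transactional pattern exercised by the tests above (readers keep a stable snapshot while a writer prepares and commits a new generation) can be sketched without the crate itself. This is a minimal std-only illustration using `Arc` swapping over a plain `HashMap`; the `ToyCell` type and its methods are invented for this sketch and are not part of concread, which instead clones only the path of tree nodes a write touches and manages garbage via the first_seen/last_seen lists seen in cursor.rs.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// A toy "concurrently readable" cell: read() hands out a snapshot that can
// never change underneath the reader; write() works on a private copy; and
// commit() atomically publishes the new generation.
struct ToyCell {
    current: Mutex<Arc<HashMap<u32, u32>>>,
}

impl ToyCell {
    fn new() -> Self {
        ToyCell {
            current: Mutex::new(Arc::new(HashMap::new())),
        }
    }

    // A read transaction: a stable snapshot of the current generation.
    fn read(&self) -> Arc<HashMap<u32, u32>> {
        Arc::clone(&self.current.lock().unwrap())
    }

    // A write transaction: here a full copy-on-write of the map. (The real
    // structure clones only the nodes on the path to a modified slot.)
    fn write(&self) -> HashMap<u32, u32> {
        (**self.current.lock().unwrap()).clone()
    }

    // Commit: readers opened before this point keep their old Arc alive;
    // readers opened after it see the new data. Dropping the write copy
    // without calling commit is a rollback.
    fn commit(&self, new: HashMap<u32, u32>) {
        *self.current.lock().unwrap() = Arc::new(new);
    }
}

fn main() {
    let cell = ToyCell::new();

    let mut w1 = cell.write();
    w1.insert(10, 10);
    cell.commit(w1);

    let r1 = cell.read(); // snapshot of generation 1
    let mut w2 = cell.write();
    w2.insert(20, 20);
    cell.commit(w2); // generation 2 published

    assert!(r1.contains_key(&10));
    assert!(!r1.contains_key(&20)); // r1 is isolated from the later commit
    let r2 = cell.read();
    assert!(r2.contains_key(&20));
    println!("ok");
}
```

The same isolation property is what `test_hashtrie_basic_read_write` asserts: `hmap_r1`, opened before the second commit, never observes key 20.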
concread-0.4.6/src/internals/bptree/cursor.rs

// The cursor is what actually knits a tree together from the parts
// we have, and has an important role to keep the system consistent.
//
// Additionally, the cursor also is responsible for general movement
// throughout the structure and how to handle that effectively

use super::node::*;
use crate::internals::lincowcell::LinCowCellCapable;
use std::borrow::Borrow;
use std::fmt::Debug;
use std::mem;

use super::iter::{Iter, KeyIter, RangeIter, ValueIter};
use super::mutiter::RangeMutIter;
use super::states::*;
use std::ops::RangeBounds;
use std::sync::Mutex;

/// The internal root of the tree, with associated garbage lists etc.
#[derive(Debug)]
pub(crate) struct SuperBlock<K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    root: *mut Node<K, V>,
    size: usize,
    txid: u64,
}

unsafe impl<K: Ord + Clone + Debug, V: Clone> Send for SuperBlock<K, V> {}
unsafe impl<K: Ord + Clone + Debug, V: Clone> Sync for SuperBlock<K, V> {}

impl<K: Ord + Clone + Debug, V: Clone> LinCowCellCapable<CursorRead<K, V>, CursorWrite<K, V>>
    for SuperBlock<K, V>
{
    fn create_reader(&self) -> CursorRead<K, V> {
        // This sets up the first reader.
        CursorRead::new(self)
    }

    fn create_writer(&self) -> CursorWrite<K, V> {
        // Create a writer.
        CursorWrite::new(self)
    }

    fn pre_commit(
        &mut self,
        mut new: CursorWrite<K, V>,
        prev: &CursorRead<K, V>,
    ) -> CursorRead<K, V> {
        let mut prev_last_seen = prev.last_seen.lock().unwrap();
        debug_assert!((*prev_last_seen).is_empty());

        let new_last_seen = &mut new.last_seen;

        // swap the two lists. We should now have "empty"
        std::mem::swap(&mut (*prev_last_seen), &mut (*new_last_seen));
        debug_assert!((*new_last_seen).is_empty());
        // Now when the lock is dropped, both sides see the correct info and garbage for drops.

        // We are done, time to seal everything.
        new.first_seen.iter().for_each(|n| unsafe {
            (**n).make_ro();
        });

        // Clear first seen, we won't be dropping them from here.
        new.first_seen.clear();

        // == Push data into our sb. ==
        self.root = new.root;
        self.size = new.length;
        self.txid = new.txid;

        // Create the new reader.
        CursorRead::new(self)
    }
}

impl<K: Ord + Clone + Debug, V: Clone> SuperBlock<K, V> {
    /// This is UNSAFE because you *MUST* understand how to manage the transactions
    /// of this type and to give a correct linearised transaction manager the ability
    /// to control this.
    ///
    /// More than likely, you WILL NOT do this so you should RUN AWAY and try to forget
    /// you ever saw this function at all.
    pub unsafe fn new() -> Self {
        let leaf: *mut Leaf<K, V> = Node::new_leaf(1);
        SuperBlock {
            root: leaf as *mut Node<K, V>,
            size: 0,
            txid: 1,
        }
    }

    #[cfg(test)]
    pub(crate) fn new_test(txid: u64, root: *mut Node<K, V>) -> Self {
        assert!(txid < (TXID_MASK >> TXID_SHF));
        assert!(txid > 0);

        // let last_seen: Vec<*mut Node<K, V>> = Vec::with_capacity(16);
        let mut first_seen = Vec::with_capacity(16);

        // Do a pre-verify to be sure it's sane.
        assert!(unsafe { (*root).verify() });

        // Collect anything from root into this txid if needed.
        // Set txid to txid on all tree nodes from the root.
        first_seen.push(root);
        unsafe { (*root).sblock_collect(&mut first_seen) };

        // Lock them all
        first_seen.iter().for_each(|n| unsafe {
            (**n).make_ro();
        });

        // Determine our count internally.
        let (length, _) = unsafe { (*root).tree_density() };

        // Good to go!
        SuperBlock {
            txid,
            size: length,
            root,
        }
    }
}

#[derive(Debug)]
pub(crate) struct CursorRead<K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    txid: u64,
    length: usize,
    root: *mut Node<K, V>,
    last_seen: Mutex<Vec<*mut Node<K, V>>>,
}

unsafe impl<K: Ord + Clone + Debug, V: Clone> Send for CursorRead<K, V> {}
unsafe impl<K: Ord + Clone + Debug, V: Clone> Sync for CursorRead<K, V> {}

#[derive(Debug)]
pub(crate) struct CursorWrite<K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    txid: u64,
    length: usize,
    root: *mut Node<K, V>,
    last_seen: Vec<*mut Node<K, V>>,
    first_seen: Vec<*mut Node<K, V>>,
}

unsafe impl<K: Ord + Clone + Debug, V: Clone> Send for CursorWrite<K, V> {}
unsafe impl<K: Ord + Clone + Debug, V: Clone> Sync for CursorWrite<K, V> {}

pub(crate) trait CursorReadOps<K: Clone + Ord + Debug, V: Clone> {
    #[allow(unused)]
    fn get_root_ref(&self) -> &Node<K, V>;

    fn get_root(&self) -> *mut Node<K, V>;

    fn len(&self) -> usize;

    fn get_txid(&self) -> u64;

    #[cfg(test)]
    fn get_tree_density(&self) -> (usize, usize) {
        // Walk the tree and calculate the packing efficiency.
let rref = self.get_root_ref(); rref.tree_density() } fn search(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord + ?Sized, { let mut node = self.get_root(); for _i in 0..65536 { if unsafe { (*node).is_leaf() } { let lref = leaf_ref!(node, K, V); return lref.get_ref(k).map(|v| unsafe { // Strip the lifetime and rebind to the lifetime of `self`. // This is safe because we know that these nodes will NOT // be altered during the lifetime of this txn, so the references // will remain stable. let x = v as *const V; &*x as &V }); } else { let bref = branch_ref!(node, K, V); let idx = bref.locate_node(k); node = bref.get_idx_unchecked(idx); } } panic!("Tree depth exceeded max limit (65536). This may indicate memory corruption."); } fn contains_key(&self, k: &Q) -> bool where K: Borrow, Q: Ord + ?Sized, { self.search(k).is_some() } fn first_key_value(&self) -> Option<(&K, &V)> { let mut node = self.get_root(); for _i in 0..65536 { if unsafe { (*node).is_leaf() } { let lref = leaf_ref!(node, K, V); return lref.min_value(); } else { let bref = branch_ref!(node, K, V); node = bref.min_node(); } } panic!("Tree depth exceeded max limit (65536). This may indicate memory corruption."); } fn last_key_value(&self) -> Option<(&K, &V)> { let mut node = self.get_root(); for _i in 0..65536 { if unsafe { (*node).is_leaf() } { let lref = leaf_ref!(node, K, V); return lref.max_value(); } else { let bref = branch_ref!(node, K, V); node = bref.max_node(); } } panic!("Tree depth exceeded max limit (65536). 
This may indicate memory corruption."); } fn range<'n, R, T>(&'n self, range: R) -> RangeIter<'n, '_, K, V> where K: Borrow, T: Ord + ?Sized, R: RangeBounds, { RangeIter::new(self.get_root(), range, self.len()) } fn kv_iter<'n>(&'n self) -> Iter<'n, '_, K, V> { Iter::new(self.get_root(), self.len()) } fn k_iter<'n>(&'n self) -> KeyIter<'n, '_, K, V> { KeyIter::new(self.get_root(), self.len()) } fn v_iter<'n>(&'n self) -> ValueIter<'n, '_, K, V> { ValueIter::new(self.get_root(), self.len()) } #[cfg(test)] fn verify(&self) -> bool { self.get_root_ref().no_cycles() && self.get_root_ref().verify() && { let (l, _) = self.get_tree_density(); l == self.len() } } } impl CursorWrite { pub(crate) fn new(sblock: &SuperBlock) -> Self { let txid = sblock.txid + 1; assert!(txid < (TXID_MASK >> TXID_SHF)); // println!("starting wr txid -> {:?}", txid); let length = sblock.size; let root = sblock.root; // TODO: Could optimise how big these are based // on past trends? Or based on % tree size? let last_seen = Vec::with_capacity(16); let first_seen = Vec::with_capacity(16); CursorWrite { txid, length, root, last_seen, first_seen, } } pub(crate) fn clear(&mut self) { // Reset the values in this tree. // We need to mark everything as disposable, and create a new root! self.last_seen.push(self.root); unsafe { (*self.root).sblock_collect(&mut self.last_seen) }; let nroot: *mut Leaf = Node::new_leaf(self.txid); let mut nroot = nroot as *mut Node; self.first_seen.push(nroot); mem::swap(&mut self.root, &mut nroot); self.length = 0; } // Functions as insert_or_update pub(crate) fn insert(&mut self, k: K, v: V) -> Option { let r = match clone_and_insert( self.root, self.txid, k, v, &mut self.last_seen, &mut self.first_seen, ) { CRInsertState::NoClone(res) => res, CRInsertState::Clone(res, mut nnode) => { // We have a new root node, swap it in. // !!! It's already been cloned and marked for cleaning by the clone_and_insert // call. 
// eprintln!("swap: {:?}, {:?}", self.root, nnode); mem::swap(&mut self.root, &mut nnode); // Return the insert result res } CRInsertState::CloneSplit(lnode, rnode) => { // The previous root had to split - make a new // root now and put it inplace. let mut nroot = Node::new_branch(self.txid, lnode, rnode) as *mut Node; self.first_seen.push(nroot); // The root was cloned as part of clone split // This swaps the POINTERS not the content! mem::swap(&mut self.root, &mut nroot); // As we split, there must NOT have been an existing // key to overwrite. None } CRInsertState::Split(rnode) => { // The previous root was already part of this txn, but has now // split. We need to construct a new root and swap them. // // Note, that we have to briefly take an extra RC on the root so // that we can get it into the branch. let mut nroot = Node::new_branch(self.txid, self.root, rnode) as *mut Node; self.first_seen.push(nroot); // println!("ls push 2"); // self.last_seen.push(self.root); mem::swap(&mut self.root, &mut nroot); // As we split, there must NOT have been an existing // key to overwrite. None } CRInsertState::RevSplit(lnode) => { let mut nroot = Node::new_branch(self.txid, lnode, self.root) as *mut Node; self.first_seen.push(nroot); // println!("ls push 3"); // self.last_seen.push(self.root); mem::swap(&mut self.root, &mut nroot); None } CRInsertState::CloneRevSplit(rnode, lnode) => { let mut nroot = Node::new_branch(self.txid, lnode, rnode) as *mut Node; self.first_seen.push(nroot); // root was cloned in the rev split // println!("ls push 4"); // self.last_seen.push(self.root); mem::swap(&mut self.root, &mut nroot); None } }; // If this is none, it means a new slot is now occupied. 
if r.is_none() { self.length += 1; } r } pub(crate) fn remove(&mut self, k: &K) -> Option { let r = match clone_and_remove( self.root, self.txid, k, &mut self.last_seen, &mut self.first_seen, ) { CRRemoveState::NoClone(res) => res, CRRemoveState::Clone(res, mut nnode) => { mem::swap(&mut self.root, &mut nnode); res } CRRemoveState::Shrink(res) => { if self_meta!(self.root).is_leaf() { // No action - we have an empty tree. res } else { // Root is being demoted, get the last branch and // promote it to the root. self.last_seen.push(self.root); let rmut = branch_ref!(self.root, K, V); let mut pnode = rmut.extract_last_node(); mem::swap(&mut self.root, &mut pnode); res } } CRRemoveState::CloneShrink(res, mut nnode) => { if self_meta!(nnode).is_leaf() { // The tree is empty, but we cloned the root to get here. mem::swap(&mut self.root, &mut nnode); res } else { // Our root is getting demoted here, get the remaining branch self.last_seen.push(nnode); let rmut = branch_ref!(nnode, K, V); let mut pnode = rmut.extract_last_node(); // Promote it to the new root mem::swap(&mut self.root, &mut pnode); res } } }; if r.is_some() { self.length -= 1; } r } #[cfg(test)] pub(crate) fn path_clone(&mut self, k: &K) { match path_clone( self.root, self.txid, k, &mut self.last_seen, &mut self.first_seen, ) { CRCloneState::Clone(mut nroot) => { // We cloned the root, so swap it. mem::swap(&mut self.root, &mut nroot); } CRCloneState::NoClone => {} }; } pub(crate) fn get_mut_ref(&mut self, k: &K) -> Option<&mut V> { match path_clone( self.root, self.txid, k, &mut self.last_seen, &mut self.first_seen, ) { CRCloneState::Clone(mut nroot) => { // We cloned the root, so swap it. mem::swap(&mut self.root, &mut nroot); } CRCloneState::NoClone => {} }; // Now get the ref. path_get_mut_ref(self.root, k) } pub(crate) fn split_off_lt(&mut self, k: &K) { /* // Remove all the values less than from the top of the tree. 
loop { let result = clone_and_split_off_trim_lt( self.root, self.txid, k, &mut self.last_seen, &mut self.first_seen, ); // println!("clone_and_split_off_trim_lt -> {:?}", result); match result { CRTrimState::Complete => break, CRTrimState::Clone(mut nroot) => { // We cloned the root as we changed it, but don't need // to recurse so we break the loop. mem::swap(&mut self.root, &mut nroot); break; } CRTrimState::Promote(mut nroot) => { mem::swap(&mut self.root, &mut nroot); // This will continue and try again. } } } */ /* // Now work up the tree and clean up the remaining path inbetween let result = clone_and_split_off_prune_lt(&mut self.root, self.txid, k); // println!("clone_and_split_off_prune_lt -> {:?}", result); match result { CRPruneState::OkNoClone => {} CRPruneState::OkClone(mut nroot) => { mem::swap(&mut self.root, &mut nroot); } CRPruneState::Prune => { if self.root.is_leaf() { // No action, the tree is now empty. } else { // Root is being demoted, get the last branch and // promote it to the root. let rmut = Arc::get_mut(&mut self.root).unwrap().as_mut_branch(); let mut pnode = rmut.extract_last_node(); mem::swap(&mut self.root, &mut pnode); } } CRPruneState::ClonePrune(mut clone) => { if self.root.is_leaf() { mem::swap(&mut self.root, &mut clone); } else { let rmut = Arc::get_mut(&mut clone).unwrap().as_mut_branch(); let mut pnode = rmut.extract_last_node(); mem::swap(&mut self.root, &mut pnode); } } }; */ // Get rid of anything else dangling let mut rmkeys: Vec = Vec::new(); for ki in self.k_iter() { if ki >= k { break; } rmkeys.push(ki.clone()); } for kr in rmkeys.into_iter() { let _ = self.remove(&kr); } // Iterate over the remaining kv's to fix our k,v count. 
let newsize = self.kv_iter().count(); self.length = newsize; } #[cfg(test)] pub(crate) fn root_txid(&self) -> u64 { self.get_root_ref().get_txid() } #[cfg(test)] pub(crate) fn tree_density(&self) -> (usize, usize) { self.get_root_ref().tree_density() } pub(crate) fn range_mut<'n, R, T>(&'n mut self, range: R) -> RangeMutIter<'n, '_, K, V> where K: Borrow, T: Ord + ?Sized, R: RangeBounds, { RangeMutIter::new(self, range) } } impl Extend<(K, V)> for CursorWrite { fn extend>(&mut self, iter: I) { iter.into_iter().for_each(|(k, v)| { let _ = self.insert(k, v); }); } } impl Drop for CursorWrite { fn drop(&mut self) { // If there is content in first_seen, this means we aborted and must rollback // of these items! // println!("Releasing CW FS -> {:?}", self.first_seen); self.first_seen.iter().for_each(|n| Node::free(*n)) } } impl Drop for CursorRead { fn drop(&mut self) { // If there is content in last_seen, a future generation wants us to remove it! let last_seen_guard = self .last_seen .try_lock() .expect("Unable to lock, something is horridly wrong!"); last_seen_guard.iter().for_each(|n| Node::free(*n)); std::mem::drop(last_seen_guard); } } impl Drop for SuperBlock { fn drop(&mut self) { // eprintln!("Releasing SuperBlock ..."); // We must be the last SB and no txns exist. Drop the tree now. // TODO: Calc this based on size. 
let mut first_seen = Vec::with_capacity(16); // eprintln!("{:?}", self.root); first_seen.push(self.root); unsafe { (*self.root).sblock_collect(&mut first_seen) }; first_seen.iter().for_each(|n| Node::free(*n)); } } impl CursorRead { pub(crate) fn new(sblock: &SuperBlock) -> Self { // println!("starting rd txid -> {:?}", sblock.txid); CursorRead { txid: sblock.txid, length: sblock.size, root: sblock.root, last_seen: Mutex::new(Vec::with_capacity(0)), } } } impl CursorReadOps for CursorRead { fn get_root_ref(&self) -> &Node { unsafe { &*(self.root) } } fn get_root(&self) -> *mut Node { self.root } fn len(&self) -> usize { self.length } fn get_txid(&self) -> u64 { self.txid } } impl CursorReadOps for CursorWrite { fn get_root_ref(&self) -> &Node { unsafe { &*(self.root) } } fn get_root(&self) -> *mut Node { self.root } fn len(&self) -> usize { self.length } fn get_txid(&self) -> u64 { self.txid } } fn clone_and_insert( node: *mut Node, txid: u64, k: K, v: V, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> CRInsertState { /* * Let's talk about the magic of this function. Come, join * me around the [🔥🔥🔥] * * This function is the heart and soul of a copy on write * structure - as we progress to the leaf location where we * wish to perform an alteration, we clone (if required) all * nodes on the path. This way an abort (rollback) of the * commit simply is to drop the cursor, where the "new" * cloned values are only referenced. To commit, we only need * to replace the tree root in the parent structures as * the cloned path must by definition include the root, and * will contain references to nodes that did not need cloning, * thus keeping them alive. */ if self_meta!(node).is_leaf() { // NOTE: We have to match, rather than map here, as rust tries to // move k:v into both closures! 
// Leaf path match leaf_ref!(node, K, V).req_clone(txid) { Some(cnode) => { // println!(); first_seen.push(cnode); // println!("ls push 5"); last_seen.push(node); // Clone was required. let mref = leaf_ref!(cnode, K, V); // insert to the new node. match mref.insert_or_update(k, v) { LeafInsertState::Ok(res) => CRInsertState::Clone(res, cnode), LeafInsertState::Split(rnode) => { first_seen.push(rnode as *mut Node); // let rnode = Node::new_leaf_ins(txid, sk, sv); CRInsertState::CloneSplit(cnode, rnode as *mut Node) } LeafInsertState::RevSplit(lnode) => { first_seen.push(lnode as *mut Node); CRInsertState::CloneRevSplit(cnode, lnode as *mut Node) } } } None => { // No clone required. // simply do the insert. let mref = leaf_ref!(node, K, V); match mref.insert_or_update(k, v) { LeafInsertState::Ok(res) => CRInsertState::NoClone(res), LeafInsertState::Split(rnode) => { // We split, but left is already part of the txn group, so lets // just return what's new. // let rnode = Node::new_leaf_ins(txid, sk, sv); first_seen.push(rnode as *mut Node); CRInsertState::Split(rnode as *mut Node) } LeafInsertState::RevSplit(lnode) => { first_seen.push(lnode as *mut Node); CRInsertState::RevSplit(lnode as *mut Node) } } } } // end match } else { // Branch path // Decide if we need to clone - we do this as we descend due to a quirk in Arc // get_mut, because we don't have access to get_mut_unchecked (and this api may // never be stabilised anyway). When we change this to *mut + garbage lists we // could consider restoring the reactive behaviour that clones up, rather than // cloning down the path. // // NOTE: We have to match, rather than map here, as rust tries to // move k:v into both closures! match branch_ref!(node, K, V).req_clone(txid) { Some(cnode) => { first_seen.push(cnode); last_seen.push(node); // Not same txn, clone instead. 
let nmref = branch_ref!(cnode, K, V); let anode_idx = nmref.locate_node(&k); let anode = nmref.get_idx_unchecked(anode_idx); match clone_and_insert(anode, txid, k, v, last_seen, first_seen) { CRInsertState::Clone(res, lnode) => { nmref.replace_by_idx(anode_idx, lnode); // Pass back up that we cloned. CRInsertState::Clone(res, cnode) } CRInsertState::CloneSplit(lnode, rnode) => { // CloneSplit here, would have already updated lnode/rnode into the // gc lists. // Second, we update anode_idx node with our lnode as the new clone. nmref.replace_by_idx(anode_idx, lnode); // Third we insert rnode - perfect world it's at anode_idx + 1, but // we use the normal insert routine for now. match nmref.add_node(rnode) { BranchInsertState::Ok => CRInsertState::Clone(None, cnode), BranchInsertState::Split(clnode, crnode) => { // Create a new branch to hold these children. let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); // Return it CRInsertState::CloneSplit(cnode, nrnode as *mut Node) } } } CRInsertState::CloneRevSplit(nnode, lnode) => { nmref.replace_by_idx(anode_idx, nnode); match nmref.add_node_left(lnode, anode_idx) { BranchInsertState::Ok => CRInsertState::Clone(None, cnode), BranchInsertState::Split(clnode, crnode) => { let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); CRInsertState::CloneSplit(cnode, nrnode as *mut Node) } } } CRInsertState::NoClone(_res) => { // If our descendant did not clone, then we don't have to either. 
unreachable!("Shoud never be possible."); // CRInsertState::NoClone(res) } CRInsertState::Split(_rnode) => { // I think unreachable!("This represents a corrupt tree state"); } CRInsertState::RevSplit(_lnode) => { unreachable!("This represents a corrupt tree state"); } } // end match } // end Some, None => { let nmref = branch_ref!(node, K, V); let anode_idx = nmref.locate_node(&k); let anode = nmref.get_idx_unchecked(anode_idx); match clone_and_insert(anode, txid, k, v, last_seen, first_seen) { CRInsertState::Clone(res, lnode) => { nmref.replace_by_idx(anode_idx, lnode); // We did not clone, and no further work needed. CRInsertState::NoClone(res) } CRInsertState::NoClone(res) => { // If our descendant did not clone, then we don't have to do any adjustments // or further work. CRInsertState::NoClone(res) } CRInsertState::Split(rnode) => { match nmref.add_node(rnode) { // Similar to CloneSplit - we are either okay, and the insert was happy. BranchInsertState::Ok => CRInsertState::NoClone(None), // Or *we* split as well, and need to return a new sibling branch. BranchInsertState::Split(clnode, crnode) => { // Create a new branch to hold these children. let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); // Return it CRInsertState::Split(nrnode as *mut Node) } } } CRInsertState::CloneSplit(lnode, rnode) => { // work inplace. // Second, we update anode_idx node with our lnode as the new clone. nmref.replace_by_idx(anode_idx, lnode); // Third we insert rnode - perfect world it's at anode_idx + 1, but // we use the normal insert routine for now. match nmref.add_node(rnode) { // Similar to CloneSplit - we are either okay, and the insert was happy. BranchInsertState::Ok => CRInsertState::NoClone(None), // Or *we* split as well, and need to return a new sibling branch. BranchInsertState::Split(clnode, crnode) => { // Create a new branch to hold these children. 
let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); // Return it CRInsertState::Split(nrnode as *mut Node) } } } CRInsertState::RevSplit(lnode) => match nmref.add_node_left(lnode, anode_idx) { BranchInsertState::Ok => CRInsertState::NoClone(None), BranchInsertState::Split(clnode, crnode) => { let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); CRInsertState::Split(nrnode as *mut Node) } }, CRInsertState::CloneRevSplit(nnode, lnode) => { nmref.replace_by_idx(anode_idx, nnode); match nmref.add_node_left(lnode, anode_idx) { BranchInsertState::Ok => CRInsertState::NoClone(None), BranchInsertState::Split(clnode, crnode) => { let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); CRInsertState::Split(nrnode as *mut Node) } } } } // end match } } // end match branch ref clone } // end if leaf } fn path_clone( node: *mut Node, txid: u64, k: &K, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> CRCloneState { if unsafe { (*node).is_leaf() } { unsafe { (*(node as *mut Leaf)) .req_clone(txid) .map(|cnode| { // Track memory last_seen.push(node); // println!("ls push 7 {:?}", node); first_seen.push(cnode); CRCloneState::Clone(cnode) }) .unwrap_or(CRCloneState::NoClone) } } else { // We are in a branch, so locate our descendent and prepare // to clone if needed. // println!("txid -> {:?} {:?}", node_txid, txid); let nmref = branch_ref!(node, K, V); let anode_idx = nmref.locate_node(k); let anode = nmref.get_idx_unchecked(anode_idx); match path_clone(anode, txid, k, last_seen, first_seen) { CRCloneState::Clone(cnode) => { // Do we need to clone? nmref .req_clone(txid) .map(|acnode| { // We require to be cloned. last_seen.push(node); // println!("ls push 8"); first_seen.push(acnode); let nmref = branch_ref!(acnode, K, V); nmref.replace_by_idx(anode_idx, cnode); CRCloneState::Clone(acnode) }) .unwrap_or_else(|| { // Nope, just insert and unwind. 
nmref.replace_by_idx(anode_idx, cnode); CRCloneState::NoClone }) } CRCloneState::NoClone => { // Did not clone, unwind. CRCloneState::NoClone } } } } fn clone_and_remove( node: *mut Node, txid: u64, k: &K, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> CRRemoveState { if self_meta!(node).is_leaf() { leaf_ref!(node, K, V) .req_clone(txid) .map(|cnode| { first_seen.push(cnode); // println!("ls push 10 {:?}", node); last_seen.push(node); let mref = leaf_ref!(cnode, K, V); match mref.remove(k) { LeafRemoveState::Ok(res) => CRRemoveState::Clone(res, cnode), LeafRemoveState::Shrink(res) => CRRemoveState::CloneShrink(res, cnode), } }) .unwrap_or_else(|| { let mref = leaf_ref!(node, K, V); match mref.remove(k) { LeafRemoveState::Ok(res) => CRRemoveState::NoClone(res), LeafRemoveState::Shrink(res) => CRRemoveState::Shrink(res), } }) } else { // Locate the node we need to work on and then react if it // requests a shrink. branch_ref!(node, K, V) .req_clone(txid) .map(|cnode| { first_seen.push(cnode); // println!("ls push 11 {:?}", node); last_seen.push(node); // Done mm let nmref = branch_ref!(cnode, K, V); let anode_idx = nmref.locate_node(k); let anode = nmref.get_idx_unchecked(anode_idx); match clone_and_remove(anode, txid, k, last_seen, first_seen) { CRRemoveState::NoClone(_res) => { unreachable!("Should never occur"); // CRRemoveState::NoClone(res) } CRRemoveState::Clone(res, lnode) => { nmref.replace_by_idx(anode_idx, lnode); CRRemoveState::Clone(res, cnode) } CRRemoveState::Shrink(_res) => { unreachable!("This represents a corrupt tree state"); } CRRemoveState::CloneShrink(res, nnode) => { // Put our cloned child into the tree at the correct location, don't worry, // the shrink_decision will deal with it. nmref.replace_by_idx(anode_idx, nnode); // Now setup the sibling, to the left *or* right. let right_idx = nmref.clone_sibling_idx(txid, anode_idx, last_seen, first_seen); // Okay, now work out what we need to do. 
                match nmref.shrink_decision(right_idx) {
                    BranchShrinkState::Balanced => {
                        // K:V were distributed through left and right,
                        // so no further action needed.
                        CRRemoveState::Clone(res, cnode)
                    }
                    BranchShrinkState::Merge(dnode) => {
                        // Right was merged to left, and we remain
                        // valid
                        // println!("ls push 20 {:?}", dnode);
                        debug_assert!(!last_seen.contains(&dnode));
                        last_seen.push(dnode);
                        CRRemoveState::Clone(res, cnode)
                    }
                    BranchShrinkState::Shrink(dnode) => {
                        // Right was merged to left, but we have now fallen under the needed
                        // amount of values.
                        // println!("ls push 21 {:?}", dnode);
                        debug_assert!(!last_seen.contains(&dnode));
                        last_seen.push(dnode);
                        CRRemoveState::CloneShrink(res, cnode)
                    }
                }
            }
        })
        .unwrap_or_else(|| {
            // We are already part of this txn
            let nmref = branch_ref!(node, K, V);
            let anode_idx = nmref.locate_node(k);
            let anode = nmref.get_idx_unchecked(anode_idx);
            match clone_and_remove(anode, txid, k, last_seen, first_seen) {
                CRRemoveState::NoClone(res) => CRRemoveState::NoClone(res),
                CRRemoveState::Clone(res, lnode) => {
                    nmref.replace_by_idx(anode_idx, lnode);
                    CRRemoveState::NoClone(res)
                }
                CRRemoveState::Shrink(res) => {
                    let right_idx = nmref.clone_sibling_idx(txid, anode_idx, last_seen, first_seen);
                    match nmref.shrink_decision(right_idx) {
                        BranchShrinkState::Balanced => {
                            // K:V were distributed through left and right,
                            // so no further action needed.
                            CRRemoveState::NoClone(res)
                        }
                        BranchShrinkState::Merge(dnode) => {
                            // Right was merged to left, and we remain
                            // valid
                            //
                            // A quirk here is based on how clone_sibling_idx works. We may actually
                            // start with anode_idx of 0, which triggers a right clone, so it's
                            // *already* in the mm lists. But here right is "last seen" now if
                            //
                            // println!("ls push 22 {:?}", dnode);
                            debug_assert!(!last_seen.contains(&dnode));
                            last_seen.push(dnode);
                            CRRemoveState::NoClone(res)
                        }
                        BranchShrinkState::Shrink(dnode) => {
                            // Right was merged to left, but we have now fallen under the needed
                            // amount of values, so we begin to shrink up.
                            // println!("ls push 23 {:?}", dnode);
                            debug_assert!(!last_seen.contains(&dnode));
                            last_seen.push(dnode);
                            CRRemoveState::Shrink(res)
                        }
                    }
                }
                CRRemoveState::CloneShrink(res, nnode) => {
                    // We don't need to clone, just work on the nmref we have.
                    //
                    // Swap in the cloned node to the correct location.
                    nmref.replace_by_idx(anode_idx, nnode);
                    // Now setup the sibling, to the left *or* right.
                    let right_idx = nmref.clone_sibling_idx(txid, anode_idx, last_seen, first_seen);
                    match nmref.shrink_decision(right_idx) {
                        BranchShrinkState::Balanced => {
                            // K:V were distributed through left and right,
                            // so no further action needed.
                            CRRemoveState::NoClone(res)
                        }
                        BranchShrinkState::Merge(dnode) => {
                            // Right was merged to left, and we remain
                            // valid
                            // println!("ls push 24 {:?}", dnode);
                            debug_assert!(!last_seen.contains(&dnode));
                            last_seen.push(dnode);
                            CRRemoveState::NoClone(res)
                        }
                        BranchShrinkState::Shrink(dnode) => {
                            // Right was merged to left, but we have now fallen under the needed
                            // amount of values.
                            // println!("ls push 25 {:?}", dnode);
                            debug_assert!(!last_seen.contains(&dnode));
                            last_seen.push(dnode);
                            CRRemoveState::Shrink(res)
                        }
                    }
                }
            }
        }) // end unwrap_or_else
    }
}

fn path_get_mut_ref<'a, K, V>(node: *mut Node<K, V>, k: &K) -> Option<&'a mut V>
where
    K: Clone + Ord + Debug + 'a,
    V: Clone,
{
    if self_meta!(node).is_leaf() {
        leaf_ref!(node, K, V).get_mut_ref(k)
    } else {
        // This nmref binds the life of the reference ...
        let nmref = branch_ref!(node, K, V);
        let anode_idx = nmref.locate_node(k);
        let anode = nmref.get_idx_unchecked(anode_idx);
        // That we get here. So we can't just return it, and we need to 'strip' the
        // lifetime so that it's bound to the lifetime of the outer node
        // rather than the nmref.
        let r: Option<*mut V> = path_get_mut_ref(anode, k).map(|v| v as *mut V);
        // I solemnly swear I am up to no good.
        r.map(|v| unsafe { &mut *v as &mut V })
    }
}

/*
fn clone_and_split_off_trim_lt(
    node: *mut Node,
    txid: u64,
    k: &K,
    last_seen: &mut Vec<*mut Node>,
    first_seen: &mut Vec<*mut Node>,
) -> CRTrimState {
    if self_meta!(node).is_leaf() {
        // No action, it's a leaf. Prune will do it.
        CRTrimState::Complete
    } else {
        branch_ref!(node, K, V)
            .req_clone(txid)
            .map(|cnode| {
                let nmref = branch_ref!(cnode, K, V);
                first_seen.push(cnode as *mut Node);
                last_seen.push(node as *mut Node);
                match nmref.trim_lt_key(k, last_seen, first_seen) {
                    BranchTrimState::Complete => CRTrimState::Clone(cnode),
                    BranchTrimState::Promote(pnode) => {
                        // We just cloned it but oh well, away you go ...
                        last_seen.push(cnode as *mut Node);
                        CRTrimState::Promote(pnode)
                    }
                }
            })
            .unwrap_or_else(|| {
                let nmref = branch_ref!(node, K, V);
                match nmref.trim_lt_key(k, last_seen, first_seen) {
                    BranchTrimState::Complete => CRTrimState::Complete,
                    BranchTrimState::Promote(pnode) => {
                        // We are about to remove our node, so mark it as the last time.
                        last_seen.push(node);
                        CRTrimState::Promote(pnode)
                    }
                }
            })
    }
}
*/

/*
fn clone_and_split_off_prune_lt(
    node: &mut ABNode,
    txid: usize,
    k: &K,
) -> CRPruneState {
    if node.is_leaf() {
        // I think this should be do nothing, the up walk will clean.
        if node.txid == txid {
            let nmref = Arc::get_mut(node).unwrap().as_mut_leaf();
            match nmref.remove_lt(k) {
                LeafPruneState::Ok => CRPruneState::OkNoClone,
                LeafPruneState::Prune => CRPruneState::Prune,
            }
        } else {
            let mut cnode = node.req_clone(txid);
            let nmref = Arc::get_mut(&mut cnode).unwrap().as_mut_leaf();
            match nmref.remove_lt(k) {
                LeafPruneState::Ok => CRPruneState::OkClone(cnode),
                LeafPruneState::Prune => CRPruneState::ClonePrune(cnode),
            }
        }
    } else {
        if node.txid == txid {
            let nmref = Arc::get_mut(node).unwrap().as_mut_branch();
            let anode_idx = nmref.locate_node(&k);
            let anode = nmref.get_idx_unchecked(anode_idx);
            let result = clone_and_split_off_prune_lt(anode, txid, k);
            // println!("== clone_and_split_off_prune_lt --> {:?}", result);
            match result {
                CRPruneState::OkNoClone => {
                    match nmref.prune(anode_idx) {
                        Ok(_) => {
                            // Okay, the branch remains valid, return that we are okay, and
                            // no clone is needed.
                            CRPruneState::OkNoClone
                        }
                        Err(_) => CRPruneState::Prune,
                    }
                }
                CRPruneState::OkClone(clone) => {
                    // Our child cloned, so replace it.
                    nmref.replace_by_idx(anode_idx, clone);
                    // Check our node for anything else to be removed.
                    match nmref.prune(anode_idx) {
                        Ok(_) => {
                            // Okay, the branch remains valid, return that we are okay, and
                            // no clone is needed.
                            CRPruneState::OkNoClone
                        }
                        Err(_) => CRPruneState::Prune,
                    }
                }
                CRPruneState::Prune => {
                    match nmref.prune_decision(txid, anode_idx) {
                        Ok(_) => {
                            // Okay, the branch remains valid. Now we need to trim any
                            // excess if possible.
                            CRPruneState::OkNoClone
                        }
                        Err(_) => CRPruneState::Prune,
                    }
                }
                CRPruneState::ClonePrune(clone) => {
                    // Our child cloned, and intends to be removed.
                    nmref.replace_by_idx(anode_idx, clone);
                    // Now make the prune decision.
                    match nmref.prune_decision(txid, anode_idx) {
                        Ok(_) => {
                            // Okay, the branch remains valid. Now we need to trim any
                            // excess if possible.
                            CRPruneState::OkNoClone
                        }
                        Err(_) => CRPruneState::Prune,
                    }
                }
            }
        } else {
            let mut cnode = node.req_clone(txid);
            let nmref = Arc::get_mut(&mut cnode).unwrap().as_mut_branch();
            let anode_idx = nmref.locate_node(&k);
            let anode = nmref.get_idx_unchecked(anode_idx);
            let result = clone_and_split_off_prune_lt(anode, txid, k);
            // println!("!= clone_and_split_off_prune_lt --> {:?}", result);
            match result {
                CRPruneState::OkNoClone => {
                    // I think this is an impossible state - how can a child be in the
                    // txid if we are not?
                    unreachable!("Impossible tree state")
                }
                CRPruneState::OkClone(clone) => {
                    // Our child cloned, so replace it.
                    nmref.replace_by_idx(anode_idx, clone);
                    // Check our node for anything else to be removed.
                    match nmref.prune(anode_idx) {
                        Ok(_) => {
                            // Okay, the branch remains valid, return that we are okay.
                            CRPruneState::OkClone(cnode)
                        }
                        Err(_) => CRPruneState::ClonePrune(cnode),
                    }
                }
                CRPruneState::Prune => {
                    unimplemented!();
                }
                CRPruneState::ClonePrune(clone) => {
                    // Our child cloned, and intends to be removed.
                    nmref.replace_by_idx(anode_idx, clone);
                    // Now make the prune decision.
                    match nmref.prune_decision(txid, anode_idx) {
                        Ok(_) => {
                            // Okay, the branch remains valid. Now we need to trim any
                            // excess if possible.
                            CRPruneState::OkClone(cnode)
                        }
                        Err(_) => CRPruneState::ClonePrune(cnode),
                    }
                } // end clone prune
            } // end match result
        }
    }
}
*/

#[cfg(test)]
mod tests {
    use super::super::node::*;
    use super::super::states::*;
    use super::SuperBlock;
    use super::{CursorRead, CursorReadOps};
    use crate::internals::lincowcell::LinCowCellCapable;
    use rand::seq::SliceRandom;
    use std::mem;

    fn create_leaf_node(v: usize) -> *mut Node<usize, usize> {
        let node = Node::new_leaf(1);
        {
            let nmut: &mut Leaf<_, _> = leaf_ref!(node, usize, usize);
            nmut.insert_or_update(v, v);
        }
        node as *mut Node<usize, usize>
    }

    fn create_leaf_node_full(vbase: usize) -> *mut Node<usize, usize> {
        assert!(vbase % 10 == 0);
        let node = Node::new_leaf(1);
        {
            let nmut = leaf_ref!(node, usize, usize);
            for idx in 0..L_CAPACITY {
                let v = vbase + idx;
                nmut.insert_or_update(v, v);
            }
            // println!("lnode full {:?} -> {:?}", vbase, nmut);
        }
        node as *mut Node<usize, usize>
    }

    fn create_branch_node_full(vbase: usize) -> *mut Node<usize, usize> {
        let l1 = create_leaf_node(vbase);
        let l2 = create_leaf_node(vbase + 10);
        let lbranch = Node::new_branch(1, l1, l2);
        let bref = branch_ref!(lbranch, usize, usize);
        for i in 2..BV_CAPACITY {
            let l = create_leaf_node(vbase + (10 * i));
            let r = bref.add_node(l);
            match r {
                BranchInsertState::Ok => {}
                _ => debug_assert!(false),
            }
        }
        assert!(bref.count() == L_CAPACITY);
        lbranch as *mut Node<usize, usize>
    }

    #[test]
    fn test_bptree2_cursor_insert_leaf() {
        // First create the node + cursor
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        eprintln!("{:?}", wcurs);
        let prev_txid = wcurs.root_txid();
        eprintln!("prev_txid {:?}", prev_txid);

        // Now insert - the txid should be different.
        let r = wcurs.insert(1, 1);
        assert!(r.is_none());
        eprintln!("get_root_ref {:?}", wcurs.get_root_ref().meta.get_txid());
        let r1_txid = wcurs.root_txid();
        assert!(r1_txid == prev_txid + 1);

        // Now insert again - the txid should be the same.
        let r = wcurs.insert(2, 2);
        assert!(r.is_none());
        let r2_txid = wcurs.root_txid();
        assert!(r2_txid == r1_txid);
        // The clones worked as we wanted!
        assert!(wcurs.verify());
    }

    #[test]
    fn test_bptree2_cursor_insert_split_1() {
        // Given a leaf at max, insert such that:
        //
        // leaf
        //
        // leaf -> split leaf
        //
        //
        //    root
        //    /  \
        // leaf  split leaf
        //
        // It's worth noting that this is testing the CloneSplit path
        // as leaf needs a clone AND to split to achieve the new root.
        let node = create_leaf_node_full(10);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        let prev_txid = wcurs.root_txid();

        let r = wcurs.insert(1, 1);
        assert!(r.is_none());
        let r1_txid = wcurs.root_txid();
        assert!(r1_txid == prev_txid + 1);
        assert!(wcurs.verify());
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_2() {
        // Similar to split_1, but test the Split only path. This means
        // leaf needs to be below max to start, and we insert enough in-txn
        // to trigger a clone of leaf AND THEN to cause the split.
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        for v in 1..(L_CAPACITY + 1) {
            // println!("ITER v {}", v);
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_3() {
        //    root
        //    /  \
        // leaf  split leaf
        //   ^
        //    \----- nnode
        //
        // Check leaf split in between l/sl (new txn)
        let lnode = create_leaf_node_full(10);
        let rnode = create_leaf_node_full(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        // println!("{:?}", wcurs);

        let r = wcurs.insert(19, 19);
        assert!(r.is_none());
        assert!(wcurs.verify());
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_4() {
        //    root
        //    /  \
        // leaf  split leaf
        //   ^
        //    \----- nnode
        //
        // Check leaf split of sl (new txn)
        //
        let lnode = create_leaf_node_full(10);
        let rnode = create_leaf_node_full(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        let r = wcurs.insert(29, 29);
        assert!(r.is_none());
        assert!(wcurs.verify());
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_5() {
        //    root
        //    /  \
        // leaf  split leaf
        //   ^
        //    \----- nnode
        //
        // Check leaf split in between l/sl (same txn)
        //
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        // Now insert to trigger the needed actions.
        // Remember, we only need L_CAPACITY because there is already a
        // value in the leaf.
        for idx in 0..(L_CAPACITY) {
            let v = 10 + 1 + idx;
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_6() {
        //    root
        //    /  \
        // leaf  split leaf
        //   ^
        //    \----- nnode
        //
        // Check leaf split of sl (same txn)
        //
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        // Now insert to trigger the needed actions.
        // Remember, we only need L_CAPACITY because there is already a
        // value in the leaf.
        for idx in 0..(L_CAPACITY) {
            let v = 20 + 1 + idx;
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_7() {
        //    root
        //    /  \
        // leaf  split leaf
        //
        // Insert to leaf then split leaf such that root has cloned
        // in step 1, but doesn't need clone in 2.
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        let r = wcurs.insert(11, 11);
        assert!(r.is_none());
        assert!(wcurs.verify());

        let r = wcurs.insert(21, 21);
        assert!(r.is_none());
        assert!(wcurs.verify());
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_split_8() {
        //    root
        //    /  \
        // leaf  split leaf
        //   ^         ^
        //    \---- nnode 1  \----- nnode 2
        //
        // Check double leaf split of sl (same txn). This is to
        // take the clonesplit path in the branch case where branch already
        // cloned.
        //
        let lnode = create_leaf_node_full(10);
        let rnode = create_leaf_node_full(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        let r = wcurs.insert(19, 19);
        assert!(r.is_none());
        assert!(wcurs.verify());

        let r = wcurs.insert(29, 29);
        assert!(r.is_none());
        assert!(wcurs.verify());
        // println!("{:?}", wcurs);
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_stress_1() {
        // Insert ascending - we want to ensure the tree is a few levels deep
        // so we do this to a reasonable number.
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        for v in 1..(L_CAPACITY << 4) {
            // println!("ITER v {}", v);
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        // println!("DENSITY -> {:?}", wcurs.get_tree_density());
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_stress_2() {
        // Insert descending
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        for v in (1..(L_CAPACITY << 4)).rev() {
            // println!("ITER v {}", v);
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        // println!("DENSITY -> {:?}", wcurs.get_tree_density());
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_stress_3() {
        // Insert random
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (1..(L_CAPACITY << 4)).collect();
        ins.shuffle(&mut rng);
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        for v in ins.into_iter() {
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        // println!("DENSITY -> {:?}", wcurs.get_tree_density());
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    // Add transaction-ised versions.

    #[test]
    fn test_bptree2_cursor_insert_stress_4() {
        // Insert ascending - we want to ensure the tree is a few levels deep
        // so we do this to a reasonable number.
        let mut sb = unsafe { SuperBlock::new() };
        let mut rdr = sb.create_reader();
        for v in 1..(L_CAPACITY << 4) {
            let mut wcurs = sb.create_writer();
            // println!("ITER v {}", v);
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
            rdr = sb.pre_commit(wcurs, &rdr);
        }
        // println!("{:?}", node);
        // On shutdown, check we dropped all as needed.
        mem::drop(rdr);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_stress_5() {
        // Insert descending
        let mut sb = unsafe { SuperBlock::new() };
        let mut rdr = sb.create_reader();
        for v in (1..(L_CAPACITY << 4)).rev() {
            let mut wcurs = sb.create_writer();
            // println!("ITER v {}", v);
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
            rdr = sb.pre_commit(wcurs, &rdr);
        }
        // println!("{:?}", node);
        // On shutdown, check we dropped all as needed.
        mem::drop(rdr);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_insert_stress_6() {
        // Insert random
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (1..(L_CAPACITY << 4)).collect();
        ins.shuffle(&mut rng);
        let mut sb = unsafe { SuperBlock::new() };
        let mut rdr = sb.create_reader();
        for v in ins.into_iter() {
            let mut wcurs = sb.create_writer();
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
            rdr = sb.pre_commit(wcurs, &rdr);
        }
        // println!("{:?}", node);
        // On shutdown, check we dropped all as needed.
        mem::drop(rdr);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_search_1() {
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        for v in 1..(L_CAPACITY << 4) {
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            let r = wcurs.search(&v);
            assert!(r.unwrap() == &v);
        }
        for v in 1..(L_CAPACITY << 4) {
            let r = wcurs.search(&v);
            assert!(r.unwrap() == &v);
        }
        // On shutdown, check we dropped all as needed.
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_length_1() {
        // Check the length is consistent on operations.
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        for v in 1..(L_CAPACITY << 4) {
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
        }
        // println!("{} == {}", wcurs.len(), L_CAPACITY << 4);
        assert!(wcurs.len() == L_CAPACITY << 4);
    }

    #[test]
    fn test_bptree2_cursor_remove_01_p0() {
        // Check that a single value can be removed correctly without change.
        // Check that a missing value is removed as "None".
        // Check that emptying the root is ok.
        // BOTH of these need new txns to check clone, and then re-use txns.
        //
        //
        let lnode = create_leaf_node_full(0);
        let sb = SuperBlock::new_test(1, lnode);
        let mut wcurs = sb.create_writer();
        // println!("{:?}", wcurs);

        for v in 0..L_CAPACITY {
            let x = wcurs.remove(&v);
            // println!("{:?}", wcurs);
            assert!(x == Some(v));
        }

        for v in 0..L_CAPACITY {
            let x = wcurs.remove(&v);
            assert!(x.is_none());
        }

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_01_p1() {
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();

        let _ = wcurs.remove(&0);
        // println!("{:?}", wcurs);

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_02() {
        // Given the tree:
        //
        //    root
        //    /  \
        // leaf  split leaf
        //
        // Remove from "split leaf" and merge left. (new txn)
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let znode = create_leaf_node(0);
        let root = Node::new_branch(0, znode, lnode);
        // Prevent the tree shrinking.
        unsafe { (*root).add_node(rnode) };
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        // println!("{:?}", wcurs);
        assert!(wcurs.verify());

        wcurs.remove(&20);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_03() {
        // Given the tree:
        //
        //    root
        //    /  \
        // leaf  split leaf
        //
        // Remove from "leaf" and merge right (really left, but you know ...). (new txn)
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let znode = create_leaf_node(30);
        let root = Node::new_branch(0, lnode, rnode);
        // Prevent the tree shrinking.
        unsafe { (*root).add_node(znode) };
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&10);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_04p0() {
        // Given the tree:
        //
        //    root
        //    /  \
        // leaf  split leaf
        //
        // Remove from "split leaf" and merge left. (leaf cloned already)
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let znode = create_leaf_node(0);
        let root = Node::new_branch(0, znode, lnode);
        // Prevent the tree shrinking.
        unsafe { (*root).add_node(rnode) };
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        // Setup sibling leaf to already be cloned.
        wcurs.path_clone(&10);
        assert!(wcurs.verify());

        wcurs.remove(&20);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_04p1() {
        // Given the tree:
        //
        //    root
        //    /  \
        // leaf  split leaf
        //
        // Remove from "split leaf" and merge left. (leaf cloned already)
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let znode = create_leaf_node(0);
        let root = Node::new_branch(0, znode, lnode);
        // Prevent the tree shrinking.
        unsafe { (*root).add_node(rnode) };
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        // Setup leaf to already be cloned.
        wcurs.path_clone(&20);
        assert!(wcurs.verify());

        wcurs.remove(&20);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_05() {
        // Given the tree:
        //
        //    root
        //    /  \
        // leaf  split leaf
        //
        // Remove from "leaf" and merge 'right'. (split leaf cloned already)
        let lnode = create_leaf_node(10);
        let rnode = create_leaf_node(20);
        let znode = create_leaf_node(30);
        let root = Node::new_branch(0, lnode, rnode);
        // Prevent the tree shrinking.
        unsafe { (*root).add_node(znode) };
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        // Setup leaf to already be cloned.
        wcurs.path_clone(&20);

        wcurs.remove(&10);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_06() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - 2node
        // txn - new
        //
        // when remove from rbranch, merge left to lbranch.
        // should cause tree height reduction.
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let r1 = create_leaf_node(20);
        let r2 = create_leaf_node(30);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = Node::new_branch(0, r1, r2);
        let root: *mut Branch<usize, usize> = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&30);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_07() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - 2node
        // txn - new
        //
        // when remove from lbranch, merge right to rbranch.
        // should cause tree height reduction.
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let r1 = create_leaf_node(20);
        let r2 = create_leaf_node(30);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = Node::new_branch(0, r1, r2);
        let root: *mut Branch<usize, usize> = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&10);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_08() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - full
        // rbranch - 2node
        // txn - new
        //
        // when remove from rbranch, borrow from lbranch
        // will NOT reduce height
        let lbranch = create_branch_node_full(0);
        let r1 = create_leaf_node(80);
        let r2 = create_leaf_node(90);
        let rbranch = Node::new_branch(0, r1, r2);
        let root: *mut Branch<usize, usize> = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&80);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_09() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - full
        // txn - new
        //
        // when remove from lbranch, borrow from rbranch
        // will NOT reduce height
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = create_branch_node_full(100);
        let root: *mut Branch<usize, usize> = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&10);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_10() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - 2node
        // txn - touch lbranch
        //
        // when remove from rbranch, merge left to lbranch.
        // should cause tree height reduction.
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let r1 = create_leaf_node(20);
        let r2 = create_leaf_node(30);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = Node::new_branch(0, r1, r2);
        let root: *mut Branch<usize, usize> = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.path_clone(&0);
        wcurs.path_clone(&10);

        wcurs.remove(&30);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_11() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - 2node
        // txn - touch rbranch
        //
        // when remove from lbranch, merge right to rbranch.
        // should cause tree height reduction.
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let r1 = create_leaf_node(20);
        let r2 = create_leaf_node(30);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = Node::new_branch(0, r1, r2);
        let root: *mut Branch<usize, usize> = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.path_clone(&20);
        wcurs.path_clone(&30);

        wcurs.remove(&0);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_12() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - full
        // rbranch - 2node
        // txn - touch lbranch
        //
        // when remove from rbranch, borrow from lbranch
        // will NOT reduce height
        let lbranch = create_branch_node_full(0);
        let r1 = create_leaf_node(80);
        let r2 = create_leaf_node(90);
        let rbranch = Node::new_branch(0, r1, r2);
        let root = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        // let count = BV_CAPACITY + 2;
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.path_clone(&0);
        wcurs.path_clone(&10);
        wcurs.path_clone(&20);

        wcurs.remove(&90);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_13() {
        // Given the tree:
        //
        //         root
        //        /    \
        //   lbranch  rbranch
        //    /  \     /  \
        //   l1   l2  r1   r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - full
        // txn - touch rbranch
        //
        // when remove from lbranch, borrow from rbranch
        // will NOT reduce height
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = create_branch_node_full(100);
        let root = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        for i in 0..BV_CAPACITY {
            let k = 100 + (10 * i);
            wcurs.path_clone(&k);
        }
        assert!(wcurs.verify());

        wcurs.remove(&10);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_14() {
        // Test leaf borrow left
        let lnode = create_leaf_node_full(10);
        let rnode = create_leaf_node(20);
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&20);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_15() {
        // Test leaf borrow right.
        let lnode = create_leaf_node(10) as *mut Node<usize, usize>;
        let rnode = create_leaf_node_full(20) as *mut Node<usize, usize>;
        let root = Node::new_branch(0, lnode, rnode);
        let sb = SuperBlock::new_test(1, root as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());

        wcurs.remove(&10);
        assert!(wcurs.verify());

        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    fn tree_create_rand() -> (SuperBlock<usize, usize>, CursorRead<usize, usize>) {
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (1..(L_CAPACITY << 4)).collect();
        ins.shuffle(&mut rng);
        let mut sb = unsafe { SuperBlock::new() };
        let rdr = sb.create_reader();
        let mut wcurs = sb.create_writer();
        for v in ins.into_iter() {
            let r = wcurs.insert(v, v);
            assert!(r.is_none());
            assert!(wcurs.verify());
        }
        let rdr = sb.pre_commit(wcurs, &rdr);
        (sb, rdr)
    }

    #[test]
    fn test_bptree2_cursor_remove_stress_1() {
        // Insert ascending - we want to ensure the tree is a few levels deep
        // so we do this to a reasonable number.
        let (mut sb, rdr) = tree_create_rand();
        let mut wcurs = sb.create_writer();
        for v in 1..(L_CAPACITY << 4) {
            // println!("-- ITER v {}", v);
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
        }
        // println!("{:?}", wcurs);
        let rdr2 = sb.pre_commit(wcurs, &rdr);
        // On shutdown, check we dropped all as needed.
        std::mem::drop(rdr2);
        std::mem::drop(rdr);
        std::mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_stress_2() {
        // Insert descending
        let (mut sb, rdr) = tree_create_rand();
        let mut wcurs = sb.create_writer();
        for v in (1..(L_CAPACITY << 4)).rev() {
            // println!("ITER v {}", v);
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
        }
        let rdr2 = sb.pre_commit(wcurs, &rdr);

        std::mem::drop(rdr2);
        std::mem::drop(rdr);
        std::mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_stress_3() {
        // Insert random
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (1..(L_CAPACITY << 4)).collect();
        ins.shuffle(&mut rng);
        let (mut sb, rdr) = tree_create_rand();
        let mut wcurs = sb.create_writer();
        for v in ins.into_iter() {
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
        }
        let rdr2 = sb.pre_commit(wcurs, &rdr);

        std::mem::drop(rdr2);
        std::mem::drop(rdr);
        std::mem::drop(sb);
        assert_released();
    }

    // Add transaction-ised versions.

    #[test]
    fn test_bptree2_cursor_remove_stress_4() {
        // Insert ascending - we want to ensure the tree is a few levels deep
        // so we do this to a reasonable number.
        let (mut sb, mut rdr) = tree_create_rand();
        for v in 1..(L_CAPACITY << 4) {
            let mut wcurs = sb.create_writer();
            // println!("ITER v {}", v);
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
            rdr = sb.pre_commit(wcurs, &rdr);
        }
        std::mem::drop(rdr);
        std::mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_stress_5() {
        // Insert descending
        let (mut sb, mut rdr) = tree_create_rand();
        for v in (1..(L_CAPACITY << 4)).rev() {
            let mut wcurs = sb.create_writer();
            // println!("ITER v {}", v);
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
            rdr = sb.pre_commit(wcurs, &rdr);
        }
        std::mem::drop(rdr);
        std::mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_remove_stress_6() {
        // Insert random
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (1..(L_CAPACITY << 4)).collect();
        ins.shuffle(&mut rng);
        let (mut sb, mut rdr) = tree_create_rand();
        for v in ins.into_iter() {
            let mut wcurs = sb.create_writer();
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
            rdr = sb.pre_commit(wcurs, &rdr);
        }
        std::mem::drop(rdr);
        std::mem::drop(sb);
        assert_released();
    }

    /*
    #[test]
    #[cfg_attr(miri, ignore)]
    fn test_bptree2_cursor_remove_stress_7() {
        // Insert random
        let mut rng = rand::thread_rng();
        let mut ins: Vec<usize> = (1..10240).collect();
        let node: *mut Leaf<usize, usize> = Node::new_leaf(0);
        let mut wcurs = CursorWrite::new_test(1, node as *mut _);
        wcurs.extend(ins.iter().map(|v| (*v, *v)));
        ins.shuffle(&mut rng);
        let compacts = 0;
        for v in ins.into_iter() {
            let r = wcurs.remove(&v);
            assert!(r == Some(v));
            assert!(wcurs.verify());
            // let (l, m) = wcurs.tree_density();
            // if l > 0 && (m / l) > 1 {
            //     compacts += 1;
            // }
        }
        println!("compacts {:?}", compacts);
    }
    */

    // This is for setting up trees that are specialised for the split off tests.
    // This is because we can exercise a LOT of complex edge cases by bracketing
    // within this tree. It also works on both node sizes.
    //
    // This is a 16 node tree, with 4 branches and a root. We have 2 values per leaf to
    // allow some cases to be explored. We also need "gaps" between the values to allow other
    // cases.
    //
    // Effectively this means we can test by splitoff on the values:
    //     for i in [0,100,200,300]:
    //         for j in [0, 10, 20, 30]:
    //             t1 = i + j
    //             t2 = i + j + 1
    //             t3 = i + j + 2
    //             t4 = i + j + 3
    //
    fn create_split_off_leaf(base: usize) -> *mut Node<usize, usize> {
        let l = Node::new_leaf(0);
        let lref = leaf_ref!(l, usize, usize);
        lref.insert_or_update(base + 1, base + 1);
        lref.insert_or_update(base + 2, base + 2);
        l as *mut _
    }

    fn create_split_off_branch(base: usize) -> *mut Node<usize, usize> {
        // This is a helper for create_split_off_tree to make the sub-branches based
        // on a base.
        let l1 = create_split_off_leaf(base);
        let l2 = create_split_off_leaf(base + 10);
        let l3 = create_split_off_leaf(base + 20);
        let l4 = create_split_off_leaf(base + 30);
        let branch = Node::new_branch(0, l1, l2);
        let nref = branch_ref!(branch, usize, usize);
        nref.add_node(l3);
        nref.add_node(l4);
        branch as *mut _
    }

    fn create_split_off_tree() -> *mut Node<usize, usize> {
        let b1 = create_split_off_branch(0);
        let b2 = create_split_off_branch(100);
        let b3 = create_split_off_branch(200);
        let b4 = create_split_off_branch(300);
        let root = Node::new_branch(0, b1, b2);
        let nref = branch_ref!(root, usize, usize);
        nref.add_node(b3);
        nref.add_node(b4);
        root as *mut _
    }

    #[test]
    fn test_bptree2_cursor_split_off_lt_01() {
        // Make a tree with just a leaf
        // Do a split_off_lt.
        let node = create_leaf_node(0);
        let sb = SuperBlock::new_test(1, node);
        let mut wcurs = sb.create_writer();
        wcurs.split_off_lt(&5);
        // Remember, all the cases of the remove_lte are already tested on
        // leaf.
        assert!(wcurs.verify());
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_bptree2_cursor_split_off_lt_02() {
        // Make a tree with just a leaf
        // Do a split_off_lt.
    let node = create_leaf_node_full(10);
    let sb = SuperBlock::new_test(1, node);
    let mut wcurs = sb.create_writer();
    wcurs.split_off_lt(&11);
    // Remember, all the cases of the remove_lte are already tested on
    // leaf.
    assert!(wcurs.verify());
    mem::drop(wcurs);
    mem::drop(sb);
    assert_released();
}

#[test]
fn test_bptree2_cursor_split_off_lt_03() {
    // Make a tree with just a leaf.
    // Do a split_off_lt.
    let node = create_leaf_node_full(10);
    let sb = SuperBlock::new_test(1, node);
    let mut wcurs = sb.create_writer();
    wcurs.path_clone(&11);
    wcurs.split_off_lt(&11);
    // Remember, all the cases of the remove_lte are already tested on
    // leaf.
    assert!(wcurs.verify());
    mem::drop(wcurs);
    mem::drop(sb);
    assert_released();
}

fn run_split_off_test_clone(v: usize, _exp: usize) {
    // println!("RUNNING -> {:?}", v);
    let tree = create_split_off_tree();
    let sb = SuperBlock::new_test(1, tree);
    let mut wcurs = sb.create_writer();
    // 0 is min, and not present, will cause no change.
    // clone everything
    let outer: [usize; 4] = [0, 100, 200, 300];
    let inner: [usize; 4] = [0, 10, 20, 30];
    for i in outer.iter() {
        for j in inner.iter() {
            wcurs.path_clone(&(i + j + 1));
        }
    }
    wcurs.split_off_lt(&v);
    assert!(wcurs.verify());
    if v > 0 {
        assert!(!wcurs.contains_key(&(v - 1)));
    }
    // assert!(wcurs.len() == exp);
    // println!("{:?}", wcurs);
    mem::drop(wcurs);
    mem::drop(sb);
    assert_released();
}

fn run_split_off_test(v: usize, _exp: usize) {
    // println!("RUNNING -> {:?}", v);
    let tree = create_split_off_tree();
    // println!("START -> {:?}", tree);
    let sb = SuperBlock::new_test(1, tree);
    let mut wcurs = sb.create_writer();
    // 0 is min, and not present, will cause no change.
    wcurs.split_off_lt(&v);
    assert!(wcurs.verify());
    if v > 0 {
        assert!(!wcurs.contains_key(&(v - 1)));
    }
    // assert!(wcurs.len() == exp);
    // println!("{:?}", wcurs);
    mem::drop(wcurs);
    mem::drop(sb);
    assert_released();
}

#[test]
fn test_bptree2_cursor_split_off_lt_clone_stress() {
    // Can't proceed as the "fake" tree we make is invalid.
    debug_assert!(L_CAPACITY >= 4);
    let outer: [usize; 4] = [0, 100, 200, 300];
    let inner: [usize; 4] = [0, 10, 20, 30];
    for i in outer.iter() {
        for j in inner.iter() {
            run_split_off_test_clone(i + j, 32);
            run_split_off_test_clone(i + j + 1, 32);
            run_split_off_test_clone(i + j + 2, 32);
            run_split_off_test_clone(i + j + 3, 32);
        }
    }
}

#[test]
fn test_bptree2_cursor_split_off_lt_stress() {
    debug_assert!(L_CAPACITY >= 4);
    let outer: [usize; 4] = [0, 100, 200, 300];
    let inner: [usize; 4] = [0, 10, 20, 30];
    for i in outer.iter() {
        for j in inner.iter() {
            run_split_off_test(i + j, 32);
            run_split_off_test(i + j + 1, 32);
            run_split_off_test(i + j + 2, 32);
            run_split_off_test(i + j + 3, 32);
        }
    }
}

#[test]
#[cfg_attr(miri, ignore)]
fn test_bptree2_cursor_split_off_lt_random_stress() {
    let data: Vec<isize> = (0..1024).collect();
    for v in data.iter() {
        let node: *mut Leaf<isize, isize> = Node::new_leaf(0) as *mut _;
        let sb = SuperBlock::new_test(1, node as *mut Node<isize, isize>);
        let mut wcurs = sb.create_writer();
        wcurs.extend(data.iter().map(|v| (*v, *v)));
        if v > &0 {
            assert!(wcurs.contains_key(&(v - 1)));
        }
        wcurs.split_off_lt(v);
        assert!(!wcurs.contains_key(&(v - 1)));
        if v < &1024 {
            assert!(wcurs.contains_key(v));
        }
        assert!(wcurs.verify());
        let contents: Vec<_> = wcurs.k_iter().collect();
        assert!(contents[0] == v);
        assert!(contents.len() as isize == (1024 - v));
    }
}

#[test]
fn test_bptree_cursor_double_extend() {
    let node: *mut Leaf<usize, usize> = Node::new_leaf(0) as *mut _;
    let sb = SuperBlock::new_test(1, node as *mut Node<usize, usize>);
    let mut wcurs = sb.create_writer();

    wcurs.extend([(0, 0), (1, 1), (2, 2), (3, 3)]);
    assert!(wcurs.len() == 4);
    assert!(wcurs.verify());

    wcurs.extend([(2, 2), (3, 3), (4, 4), (5, 5)]);
    assert!(wcurs.len() == 6);
    assert!(wcurs.verify());

    mem::drop(wcurs);
    mem::drop(sb);
    assert_released();
}

/*
#[test]
fn test_bptree_cursor_get_mut_ref_1() {
    // Test that we can clone a path (new txn)
    // Test that we don't re-clone.
    let lnode = create_leaf_node_full(10);
    let rnode = create_leaf_node_full(20);
    let root = Node::new_branch(0, lnode, rnode);
    let mut wcurs = CursorWrite::new(root, 0);
    assert!(wcurs.verify());

    let r1 = wcurs.get_mut_ref(&10);
    std::mem::drop(r1);
    let r1 = wcurs.get_mut_ref(&10);
    std::mem::drop(r1);
}
*/
}
concread-0.4.6/src/internals/bptree/iter.rs
//! Iterators for the map.

// Iterators for the bptree
use super::node::{Branch, Leaf, Meta, Node};
use std::borrow::Borrow;
use std::collections::VecDeque;
use std::fmt::Debug;
use std::marker::PhantomData;
use std::ops::{Bound, RangeBounds};

pub(crate) struct LeafIter<'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    stack: VecDeque<(*mut Node<K, V>, usize)>,
    phantom_k: PhantomData<&'a K>,
    phantom_v: PhantomData<&'a V>,
}

impl<K, V> LeafIter<'_, K, V>
where
    K: Clone + Ord + Debug,
    V: Clone,
{
    pub(crate) fn new<T>(root: *mut Node<K, V>, bound: Bound<&T>) -> Self
    where
        T: Ord + ?Sized,
        K: Borrow<T>,
    {
        // We need to position the VecDeque here.
        let mut stack = VecDeque::new();
        let mut work_node = root;
        loop {
            if self_meta!(work_node).is_leaf() {
                stack.push_back((work_node, 0));
                break;
            } else {
                match bound {
                    Bound::Excluded(q) | Bound::Included(q) => {
                        let bref = branch_ref!(work_node, K, V);
                        let idx = bref.locate_node(q);
                        // This is the index we are currently chasing from
                        // within this node.
                        stack.push_back((work_node, idx));
                        work_node = bref.get_idx_unchecked(idx);
                    }
                    Bound::Unbounded => {
                        stack.push_back((work_node, 0));
                        work_node = branch_ref!(work_node, K, V).get_idx_unchecked(0);
                    }
                }
            }
        }
        // eprintln!("{:?}", stack);

        LeafIter {
            stack,
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }

    #[cfg(test)]
    pub(crate) fn new_base() -> Self {
        LeafIter {
            stack: VecDeque::new(),
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }

    pub(crate) fn stack_position(&mut self) {
        debug_assert!(match self.stack.back() {
            Some((node, _)) => {
                self_meta!(*node).is_branch()
            }
            None => true,
        });
        'outer: loop {
            // Get the current branch, it must be at the back.
            if let Some((bref, bpidx)) = self.stack.back_mut() {
                let wbranch = branch_ref!(*bref, K, V);
                // We were currently looking at bpidx in bref. Increment and
                // check what's next.
                *bpidx += 1;
                if let Some(node) = wbranch.get_idx_checked(*bpidx) {
                    // Got the new node, continue down.
                    let mut work_node = node;
                    loop {
                        self.stack.push_back((work_node, 0));
                        if self_meta!(work_node).is_leaf() {
                            break 'outer;
                        } else {
                            work_node = branch_ref!(work_node, K, V).get_idx_unchecked(0);
                        }
                    }
                } else {
                    let _ = self.stack.pop_back();
                    continue 'outer;
                }
            } else {
                // Must have been none, so we are exhausted. This means
                // the stack is empty, so return.
                break 'outer;
            }
        }
        // Done!
    }

    pub(crate) fn get_mut(&mut self) -> Option<&mut (*mut Node<K, V>, usize)> {
        self.stack.back_mut()
    }

    pub(crate) fn clear(&mut self) {
        self.stack.clear()
    }

    pub(crate) fn is_empty(&self) -> bool {
        self.stack.is_empty()
    }
}

impl<'a, K: Clone + Ord + Debug, V: Clone> Iterator for LeafIter<'a, K, V> {
    type Item = &'a Leaf<K, V>;

    fn next(&mut self) -> Option<Self::Item> {
        // base case is the vecdeque is empty
        let (leafref, _) = match self.stack.pop_back() {
            Some(lr) => lr,
            None => return None,
        };

        // Setup the VecDeque for the next iteration.
        self.stack_position();

        // Return the leaf we found at the start, regardless of the
        // stack operations.
        Some(leaf_ref!(leafref, K, V))
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, None)
    }
}

pub(crate) struct RevLeafIter<'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    stack: VecDeque<(*mut Node<K, V>, usize)>,
    phantom_k: PhantomData<&'a K>,
    phantom_v: PhantomData<&'a V>,
}

impl<K, V> RevLeafIter<'_, K, V>
where
    K: Clone + Ord + Debug,
    V: Clone,
{
    pub(crate) fn new<T>(root: *mut Node<K, V>, bound: Bound<&T>) -> Self
    where
        T: Ord + ?Sized,
        K: Borrow<T>,
    {
        // We need to position the VecDeque here.
        let mut stack = VecDeque::new();
        let mut work_node = root;
        loop {
            if self_meta!(work_node).is_leaf() {
                // Put in the max len here ...
                let lref = leaf_ref!(work_node, K, V);
                if lref.count() > 0 {
                    stack.push_back((work_node, lref.count() - 1));
                }
                break;
            } else {
                let bref = branch_ref!(work_node, K, V);
                match bound {
                    Bound::Excluded(q) | Bound::Included(q) => {
                        let idx = bref.locate_node(q);
                        // This is the index we are currently chasing from
                        // within this node.
                        stack.push_back((work_node, idx));
                        work_node = bref.get_idx_unchecked(idx);
                    }
                    Bound::Unbounded => {
                        // count shows the most right node.
                        stack.push_back((work_node, bref.count()));
                        work_node = branch_ref!(work_node, K, V).get_idx_unchecked(bref.count());
                    }
                }
            }
        }
        // eprintln!("{:?}", stack);

        RevLeafIter {
            stack,
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }

    #[cfg(test)]
    pub(crate) fn new_base() -> Self {
        RevLeafIter {
            stack: VecDeque::new(),
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }

    pub(crate) fn stack_position(&mut self) {
        debug_assert!(match self.stack.back() {
            Some((node, _)) => {
                self_meta!(*node).is_branch()
            }
            None => true,
        });
        'outer: loop {
            // Get the current branch, it must be at the back.
            if let Some((bref, bpidx)) = self.stack.back_mut() {
                let wbranch = branch_ref!(*bref, K, V);
                // We were currently looking at bpidx in bref. Decrement and
                // check what's next.
                // NOTE: If this underflows, it's okay because idx_checked won't
                // return the Some case!
                let (nidx, oflow) = (*bpidx).overflowing_sub(1);
                if oflow {
                    let _ = self.stack.pop_back();
                    continue 'outer;
                }
                *bpidx = nidx;
                if let Some(node) = wbranch.get_idx_checked(*bpidx) {
                    // Got the new node, continue down.
                    let mut work_node = node;
                    loop {
                        if self_meta!(work_node).is_leaf() {
                            let lref = leaf_ref!(work_node, K, V);
                            self.stack.push_back((work_node, lref.count() - 1));
                            break 'outer;
                        } else {
                            let bref = branch_ref!(work_node, K, V);
                            let idx = bref.count();
                            self.stack.push_back((work_node, idx));
                            work_node = bref.get_idx_unchecked(idx);
                        }
                    }
                } else {
                    let _ = self.stack.pop_back();
                    continue 'outer;
                }
            } else {
                // Must have been none, so we are exhausted. This means
                // the stack is empty, so return.
                break 'outer;
            }
        }
    }

    pub(crate) fn get_mut(&mut self) -> Option<&mut (*mut Node<K, V>, usize)> {
        self.stack.back_mut()
    }

    pub(crate) fn clear(&mut self) {
        self.stack.clear()
    }

    pub(crate) fn is_empty(&self) -> bool {
        self.stack.is_empty()
    }
}

impl<'a, K: Clone + Ord + Debug, V: Clone> Iterator for RevLeafIter<'a, K, V> {
    type Item = &'a Leaf<K, V>;

    fn next(&mut self) -> Option<Self::Item> {
        // base case is the vecdeque is empty
        let (leafref, _) = match self.stack.pop_back() {
            Some(lr) => lr,
            None => return None,
        };

        // Setup the VecDeque for the next iteration.
        self.stack_position();

        // Return the leaf we found at the start, regardless of the
        // stack operations.
        Some(leaf_ref!(leafref, K, V))
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, None)
    }
}

// Wrappers

/// Iterator over references to Key Value pairs stored in the map.
pub struct Iter<'n, 'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    iter: RangeIter<'n, 'a, K, V>,
}

impl<K: Ord + Clone + Debug, V: Clone> Iter<'_, '_, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, length: usize) -> Self {
        let bounds: (Bound<K>, Bound<K>) = (Bound::Unbounded, Bound::Unbounded);
        let iter = RangeIter::new(root, bounds, length);
        Iter { iter }
    }
}

impl<'a, K: Clone + Ord + Debug, V: Clone> Iterator for Iter<'_, 'a, K, V> {
    type Item = (&'a K, &'a V);

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next()
    }

    /// Provide a hint as to the number of items this iterator will yield.
    fn size_hint(&self) -> (usize, Option<usize>) {
        match self.iter.size_hint() {
            // Transpose the x through as a lower bound.
            (_, Some(x)) => (x, Some(x)),
            (_, None) => (0, None),
        }
    }
}

impl<K: Clone + Ord + Debug, V: Clone> DoubleEndedIterator for Iter<'_, '_, K, V> {
    /// Yield the next key value reference, or `None` if exhausted.
    fn next_back(&mut self) -> Option<Self::Item> {
        self.iter.next_back()
    }
}

/// Iterator over references to Keys stored in the map.
pub struct KeyIter<'n, 'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    iter: Iter<'n, 'a, K, V>,
}

impl<K: Ord + Clone + Debug, V: Clone> KeyIter<'_, '_, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, length: usize) -> Self {
        KeyIter {
            iter: Iter::new(root, length),
        }
    }
}

impl<'a, K: Clone + Ord + Debug, V: Clone> Iterator for KeyIter<'_, 'a, K, V> {
    type Item = &'a K;

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|(k, _)| k)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

impl<K: Clone + Ord + Debug, V: Clone> DoubleEndedIterator for KeyIter<'_, '_, K, V> {
    /// Yield the next key value reference, or `None` if exhausted.
    fn next_back(&mut self) -> Option<Self::Item> {
        self.iter.next_back().map(|(k, _)| k)
    }
}

/// Iterator over references to Values stored in the map.
pub struct ValueIter<'n, 'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    iter: Iter<'n, 'a, K, V>,
}

impl<K: Ord + Clone + Debug, V: Clone> ValueIter<'_, '_, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, length: usize) -> Self {
        ValueIter {
            iter: Iter::new(root, length),
        }
    }
}

impl<'a, K: Clone + Ord + Debug, V: Clone> Iterator for ValueIter<'_, 'a, K, V> {
    type Item = &'a V;

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|(_, v)| v)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

impl<K: Clone + Ord + Debug, V: Clone> DoubleEndedIterator for ValueIter<'_, '_, K, V> {
    /// Yield the next key value reference, or `None` if exhausted.
    fn next_back(&mut self) -> Option<Self::Item> {
        self.iter.next_back().map(|(_, v)| v)
    }
}

/// Iterator over references to Key Value pairs stored, bounded by a range.
pub struct RangeIter<'n, 'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    length: Option<usize>,
    left_iter: LeafIter<'a, K, V>,
    right_iter: RevLeafIter<'a, K, V>,
    phantom_k: PhantomData<&'a K>,
    phantom_v: PhantomData<&'a V>,
    phantom_root: PhantomData<&'n ()>,
}

impl<K, V> RangeIter<'_, '_, K, V>
where
    K: Clone + Ord + Debug,
    V: Clone,
{
    pub(crate) fn new<T, R>(root: *mut Node<K, V>, range: R, length: usize) -> Self
    where
        T: Ord + ?Sized,
        K: Borrow<T>,
        R: RangeBounds<T>,
    {
        let length = Some(length);
        // We need to position the VecDeque here. This requires us
        // to know the bounds that we have. We do this similarly to the main
        // rust library tree by locating our "edges", and maintaining stacks to their paths.

        // Setup and position the two iters.
        let mut left_iter = LeafIter::new(root, range.start_bound());
        let mut right_iter = RevLeafIter::new(root, range.end_bound());

        // If needed, advance the left / right iter depending on the situation.
        match range.start_bound() {
            Bound::Unbounded => {
                // Do nothing!
            }
            Bound::Included(k) => {
                if let Some((node, idx)) = left_iter.get_mut() {
                    let leaf = leaf_ref!(*node, K, V);
                    // eprintln!("Positioning Included with ... {:?} {:?}", leaf, idx);
                    match leaf.locate(k) {
                        Ok(fidx) | Err(fidx) => {
                            // eprintln!("Using, {}", fidx);
                            *idx = fidx;
                            // Done!
                        }
                    }
                } else {
                    // Do nothing, it's empty.
                }
            }
            Bound::Excluded(k) => {
                if let Some((node, idx)) = left_iter.get_mut() {
                    let leaf = leaf_ref!(*node, K, V);
                    // eprintln!("Positioning Excluded with ... {:?} {:?}", leaf, idx);
                    match leaf.locate(k) {
                        Ok(fidx) => {
                            // eprintln!("Excluding Using, {}", fidx);
                            *idx = fidx + 1;
                            if *idx >= leaf.count() {
                                // Okay, this means we overflowed to the next leaf, so just
                                // advance the leaf iter to the start of the next leaf.
                                left_iter.next();
                            }
                            // Done
                        }
                        Err(fidx) => {
                            // eprintln!("Using, {}", fidx);
                            *idx = fidx;
                            // Done!
                        }
                    }
                } else {
                    // Do nothing, the leaf iter is empty.
                }
            }
        }

        match range.end_bound() {
            Bound::Unbounded => {
                // Do nothing!
            }
            Bound::Included(k) => {
                if let Some((node, idx)) = right_iter.get_mut() {
                    let leaf = leaf_ref!(*node, K, V);
                    // eprintln!("Positioning Included with ... {:?} {:?}", leaf, idx);
                    match leaf.locate(k) {
                        Ok(fidx) => {
                            *idx = fidx;
                        }
                        Err(fidx) => {
                            // eprintln!("Using, {}", fidx);
                            let (nidx, oflow) = fidx.overflowing_sub(1);
                            if oflow {
                                right_iter.next();
                            } else {
                                *idx = nidx;
                            }
                            // Done!
                        }
                    }
                } else {
                    // Do nothing, it's empty.
                }
            }
            Bound::Excluded(k) => {
                if let Some((node, idx)) = right_iter.get_mut() {
                    let leaf = leaf_ref!(*node, K, V);
                    // eprintln!("Positioning Excluded with ... {:?} {:?}", leaf, idx);
                    match leaf.locate(k) {
                        Ok(fidx) | Err(fidx) => {
                            // eprintln!("Using, {}", fidx);
                            let (nidx, oflow) = fidx.overflowing_sub(1);
                            if oflow {
                                right_iter.next();
                            } else {
                                *idx = nidx;
                            }
                            // Done!
                        }
                    }
                } else {
                    // Do nothing, it's empty.
                }
            }
        }

        // If either side is empty, it indicates a bound hit the end of the tree
        // and we can't proceed
        if left_iter.is_empty() || right_iter.is_empty() {
            left_iter.clear();
            right_iter.clear();
        }

        RangeIter {
            length,
            left_iter,
            right_iter,
            phantom_k: PhantomData,
            phantom_v: PhantomData,
            phantom_root: PhantomData,
        }
    }

    #[cfg(test)]
    pub(crate) fn new_base() -> Self {
        RangeIter {
            length: None,
            left_iter: LeafIter::new_base(),
            right_iter: RevLeafIter::new_base(),
            phantom_k: PhantomData,
            phantom_v: PhantomData,
            phantom_root: PhantomData,
        }
    }
}

impl<'a, K: Clone + Ord + Debug, V: Clone> Iterator for RangeIter<'_, 'a, K, V> {
    type Item = (&'a K, &'a V);

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        loop {
            if let Some((node, idx)) = self.left_iter.get_mut() {
                // eprintln!("Next with ... {:?} {:?}", node, idx);
                let leaf = leaf_ref!(*node, K, V);
                // Get idx checked.
                if let Some(r) = leaf.get_kv_idx_checked(*idx) {
                    if let Some((rnode, ridx)) = self.right_iter.get_mut() {
                        if rnode == node && idx == ridx {
                            // eprintln!("Clearing lists, end condition reached");
                            // Was the node + index the same as right?
                            // It means we just exhausted the list.
                            self.right_iter.clear();
                            self.left_iter.clear();
                            return Some(r);
                        }
                    }
                    let nidx = *idx + 1;
                    if nidx >= leaf.count() {
                        self.left_iter.next();
                    } else {
                        *idx = nidx;
                    }
                    return Some(r);
                } else {
                    // Go to the next leaf.
                    self.left_iter.next();
                    continue;
                }
            } else {
                break None;
            }
        }
    }

    /// Provide a hint as to the number of items this iterator will yield.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, self.length)
    }
}

impl<K: Clone + Ord + Debug, V: Clone> DoubleEndedIterator for RangeIter<'_, '_, K, V> {
    /// Yield the next key value reference, or `None` if exhausted.
    fn next_back(&mut self) -> Option<Self::Item> {
        loop {
            if let Some((node, idx)) = self.right_iter.get_mut() {
                let leaf = leaf_ref!(*node, K, V);
                // Get idx checked.
if let Some(r) = leaf.get_kv_idx_checked(*idx) { if let Some((lnode, lidx)) = self.left_iter.get_mut() { if lnode == node && idx == lidx { // eprintln!("Clearing lists, end condition reached"); // Was the node + index the same as right? // It means we just exhausted the list. self.right_iter.clear(); self.left_iter.clear(); return Some(r); } } let (nidx, oflow) = (*idx).overflowing_sub(1); if oflow { self.right_iter.next(); } else { *idx = nidx; } return Some(r); } else { // Go to the next leaf. self.right_iter.next(); continue; } } else { break None; } } } } #[cfg(test)] mod tests { use super::super::cursor::SuperBlock; use super::super::node::{Branch, Leaf, Node, L_CAPACITY, L_CAPACITY_N1}; use super::{Iter, LeafIter, RangeIter, RevLeafIter}; use std::ops::Bound; use std::ops::Bound::*; fn create_leaf_node_full(vbase: usize) -> *mut Node { assert!(vbase % 10 == 0); let node = Node::new_leaf(0); { let nmut = leaf_ref!(node, usize, usize); for idx in 0..L_CAPACITY { let v = vbase + idx; nmut.insert_or_update(v, v); } } node as *mut _ } #[test] fn test_bptree2_iter_leafiter_1() { let test_iter: LeafIter = LeafIter::new_base(); assert!(test_iter.count() == 0); } #[test] fn test_bptree2_iter_leafiter_2() { let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, Unbounded); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_leafiter_3() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Unbounded); let lref = test_iter.next().unwrap(); let rref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(rref.min() == &20); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_4() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Unbounded); let l1ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(l1ref.min() == &10); assert!(r1ref.min() == &20); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_5() { let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, Unbounded); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_1() { // Test a lower bound that is *minimum*. let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, Included(&0)); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_2() { // Test a lower bound that is *within*. let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, Included(&10)); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_3() { // Test a lower bound that is *greater*. let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, Included(&100)); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_4() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&0)); // Cursor should be positioned on the node with "10" let lref = test_iter.next().unwrap(); let rref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(rref.min() == &20); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_5() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&10)); // Cursor should be positioned on the node with "10" let lref = test_iter.next().unwrap(); let rref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(rref.min() == &20); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_6() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&19)); // Cursor should be positioned on the node with "10" let lref = test_iter.next().unwrap(); let rref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(rref.min() == &20); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_7() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); eprintln!("{:?}, {:?}", lnode, rnode); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&20)); // Cursor should be positioned on the node with "20" let lref = test_iter.next().unwrap(); assert!(lref.min() == &20); let x = test_iter.next(); eprintln!("{:?}", x); assert!(x.is_none()); // assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_8() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&100)); // Cursor should be positioned on the node with "20" let lref = test_iter.next().unwrap(); assert!(lref.min() == &20); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_9() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&0)); // Should be on the 10 let l1ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(l1ref.min() == &10); assert!(r1ref.min() == &20); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_10() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&15)); // Should be on the 10 let l1ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(l1ref.min() == &10); assert!(r1ref.min() == &20); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_11() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&20)); // Should be on the 20 let r1ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(r1ref.min() == &20); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_12() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&25)); // Should be on the 20 let r1ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(r1ref.min() == &20); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_13() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&30)); // Should be on the 30 let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_14() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&35)); // Should be on the 30 let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(l2ref.min() == &30); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_15() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&40)); // Should be on the 40 let r2ref = test_iter.next().unwrap(); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_leafiter_bound_16() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, Included(&100)); // Should be on the 40 let r2ref = test_iter.next().unwrap(); assert!(r2ref.min() == &40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } // == Reverse Leaf Iter == #[test] fn test_bptree2_iter_revleafiter_1() { let test_iter: RevLeafIter = RevLeafIter::new_base(); assert!(test_iter.count() == 0); } #[test] fn test_bptree2_iter_revleafiter_2() { let lnode = create_leaf_node_full(10); let mut test_iter = RevLeafIter::new(lnode, Unbounded); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_revleafiter_3() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Unbounded); let rref = test_iter.next().unwrap(); let lref = test_iter.next().unwrap(); assert!(rref.min() == &20); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_4() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Unbounded); let r2ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(r2ref.min() == &40); assert!(l2ref.min() == &30); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_1() { // Test an upper bound that is *maximum*. let lnode = create_leaf_node_full(10); let mut test_iter = RevLeafIter::new(lnode, Included(&100)); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_2() { // Test a lower bound that is *within*. 
let lnode = create_leaf_node_full(10); let mut test_iter = RevLeafIter::new(lnode, Included(&10)); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_3() { // Test a lower bound that is *minimum*. let lnode = create_leaf_node_full(10); let mut test_iter = RevLeafIter::new(lnode, Included(&0)); let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_5() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&10)); // Cursor should be positioned on the node with "10" let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_6() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&19)); // Cursor should be positioned on the node with "10" let lref = test_iter.next().unwrap(); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_7() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); eprintln!("{:?}, {:?}", lnode, rnode); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&20)); // Cursor should be positioned on the node with "20" let rref = test_iter.next().unwrap(); let lref = test_iter.next().unwrap(); assert!(rref.min() == &20); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_8() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&100)); // Cursor should be positioned on the node with "20" let rref = test_iter.next().unwrap(); let lref = test_iter.next().unwrap(); assert!(rref.min() == &20); assert!(lref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_9() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&100)); // Should be on the 40 let r2ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(r2ref.min() == &40); assert!(l2ref.min() == &30); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_10() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&45)); // Should be on the 40 let r2ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(r2ref.min() == &40); assert!(l2ref.min() == &30); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_11() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&35)); // Should be on the 30 let l2ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(l2ref.min() == &30); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_12() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&30)); // Should be on the 30 let l2ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(l2ref.min() == &30); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_13() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&25)); // Should be on the 20 let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_14() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&20)); // Should be on the 20 let r1ref = test_iter.next().unwrap(); let l1ref = test_iter.next().unwrap(); assert!(r1ref.min() == &20); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_15() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&15)); // Should be on the 10 let l1ref = test_iter.next().unwrap(); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_revleafiter_bound_16() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: RevLeafIter = RevLeafIter::new(root as *mut _, Included(&0)); // Should be on the 10 let l1ref = test_iter.next().unwrap(); assert!(l1ref.min() == &10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_iter_1() { // Make a tree let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let test_iter: Iter = Iter::new(root as *mut _, L_CAPACITY * 2); assert!(test_iter.size_hint() == (L_CAPACITY * 2, Some(L_CAPACITY * 2))); assert!(test_iter.count() == L_CAPACITY * 2); // Iterate! // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_iter_2() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let test_iter: Iter = Iter::new(root as *mut _, L_CAPACITY * 4); // println!("{:?}", test_iter.size_hint()); assert!(test_iter.size_hint() == (L_CAPACITY * 4, Some(L_CAPACITY * 4))); assert!(test_iter.count() == L_CAPACITY * 4); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_rangeiter_1() { let test_iter: RangeIter = RangeIter::new_base(); assert!(test_iter.count() == 0); } #[test] fn test_bptree2_iter_rangeiter_2() { let lnode = create_leaf_node_full(10); let bounds: (Bound, Bound) = (Unbounded, Unbounded); let test_iter = RangeIter::new(lnode, bounds, L_CAPACITY); assert!(test_iter.count() == L_CAPACITY); for i in 0..L_CAPACITY { let l_bound = 10 + i; let bounds: (Bound, Bound) = (Included(l_bound), Unbounded); let test_iter = RangeIter::new(lnode, bounds, L_CAPACITY); let i_count = test_iter.count(); let x_count = L_CAPACITY - i; eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } for i in 0..L_CAPACITY { let l_bound = 10 + i; let bounds: (Bound, Bound) = (Excluded(l_bound), Unbounded); let test_iter = RangeIter::new(lnode, bounds, L_CAPACITY); let i_count = test_iter.count(); let x_count = L_CAPACITY_N1 - i; eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_bptree2_iter_rangeiter_3() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let bounds: (Bound, Bound) = (Unbounded, Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); assert!(test_iter.count() == (L_CAPACITY * 4)); for j in 1..5 { for i in 0..L_CAPACITY { let l_bound = (j * 10) + i; let bounds: (Bound, Bound) = (Included(l_bound), Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); let i_count = test_iter.count(); let x_count = ((5 - j) * L_CAPACITY) - i; eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } } for j in 1..5 { for i in 0..L_CAPACITY { let l_bound = (j * 10) + i; let bounds: (Bound, Bound) = (Excluded(l_bound), Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); let i_count = test_iter.count(); let x_count = ((5 - j) * L_CAPACITY) - (i + 1); eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } } // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_rangeiter_4() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let bounds: (Bound, Bound) = (Unbounded, Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); assert!(test_iter.count() == (L_CAPACITY * 4)); for j in 1..5 { for i in 0..L_CAPACITY { let r_bound = (j * 10) + i; let bounds: (Bound, Bound) = (Unbounded, Included(r_bound)); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); let i_count = test_iter.count(); let x_count = ((L_CAPACITY * 4) - (((4 - j) * L_CAPACITY) + (L_CAPACITY - i))) + 1; eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } } for j in 1..5 { for i in 0..L_CAPACITY { let r_bound = (j * 10) + i; let bounds: (Bound, Bound) = (Unbounded, Excluded(r_bound)); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); let i_count = test_iter.count(); let x_count = (L_CAPACITY * 4) - (((4 - j) * L_CAPACITY) + (L_CAPACITY - i)); eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } } // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_bptree2_iter_rangeiter_5() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let bounds: (Bound, Bound) = (Unbounded, Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); assert!(test_iter.rev().count() == (L_CAPACITY * 4)); for j in 1..5 { for i in 0..L_CAPACITY { let l_bound = (j * 10) + i; let bounds: (Bound, Bound) = (Included(l_bound), Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); let i_count = test_iter.rev().count(); let x_count = ((5 - j) * L_CAPACITY) - i; eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } } for j in 1..5 { for i in 0..L_CAPACITY { let l_bound = (j * 10) + i; let bounds: (Bound, Bound) = (Excluded(l_bound), Unbounded); let test_iter: RangeIter = RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4); let i_count = test_iter.rev().count(); let x_count = ((5 - j) * L_CAPACITY) - (i + 1); eprintln!("ex {} == {}", i_count, x_count); assert!(i_count == x_count); } } // This drops everything. 
    let _sb: SuperBlock<usize, usize> = SuperBlock::new_test(1, root as *mut _);
}

#[test]
fn test_bptree2_iter_rangeiter_6() {
    let l1node = create_leaf_node_full(10);
    let r1node = create_leaf_node_full(20);
    let l2node = create_leaf_node_full(30);
    let r2node = create_leaf_node_full(40);
    let b1node = Node::new_branch(0, l1node, r1node);
    let b2node = Node::new_branch(0, l2node, r2node);
    let root: *mut Branch<usize, usize> =
        Node::new_branch(0, b1node as *mut _, b2node as *mut _);

    let bounds: (Bound<usize>, Bound<usize>) = (Unbounded, Unbounded);
    let test_iter: RangeIter<usize, usize> =
        RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4);
    assert!(test_iter.rev().count() == (L_CAPACITY * 4));

    for j in 1..5 {
        for i in 0..L_CAPACITY {
            let r_bound = (j * 10) + i;
            let bounds: (Bound<usize>, Bound<usize>) = (Unbounded, Included(r_bound));
            let test_iter: RangeIter<usize, usize> =
                RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4);
            let i_count = test_iter.rev().count();
            let x_count = ((L_CAPACITY * 4) - (((4 - j) * L_CAPACITY) + (L_CAPACITY - i))) + 1;
            eprintln!("ex {} == {}", i_count, x_count);
            assert!(i_count == x_count);
        }
    }

    for j in 1..5 {
        for i in 0..L_CAPACITY {
            let r_bound = (j * 10) + i;
            let bounds: (Bound<usize>, Bound<usize>) = (Unbounded, Excluded(r_bound));
            let test_iter: RangeIter<usize, usize> =
                RangeIter::new(root as *mut _, bounds, L_CAPACITY * 4);
            let i_count = test_iter.rev().count();
            let x_count = (L_CAPACITY * 4) - (((4 - j) * L_CAPACITY) + (L_CAPACITY - i));
            eprintln!("ex {} == {}", i_count, x_count);
            assert!(i_count == x_count);
        }
    }

    // This drops everything.
    let _sb: SuperBlock<usize, usize> = SuperBlock::new_test(1, root as *mut _);
}
}

concread-0.4.6/src/internals/bptree/macros.rs

macro_rules! debug_assert_leaf {
    ($x:expr) => {{
        debug_assert!($x.meta.is_leaf());
    }};
}

macro_rules! debug_assert_branch {
    ($x:expr) => {{
        debug_assert!($x.meta.is_branch());
    }};
}

macro_rules! self_meta {
    ($x:expr) => {{
        unsafe { &mut *($x as *mut Meta) }
    }};
}

macro_rules!
branch_ref {
    ($x:expr, $k:ty, $v:ty) => {{
        debug_assert!(unsafe { (*$x).meta.is_branch() });
        unsafe { &mut *($x as *mut Branch<$k, $v>) }
    }};
}

macro_rules! leaf_ref {
    ($x:expr, $k:ty, $v:ty) => {{
        debug_assert!(unsafe { (*$x).meta.is_leaf() });
        unsafe { &mut *($x as *mut Leaf<$k, $v>) }
    }};
}

macro_rules! key_search {
    ($self:expr, $k:expr) => {{
        let (left, _) = $self.key.split_at($self.count());
        let inited: &[K] =
            unsafe { slice::from_raw_parts(left.as_ptr() as *const K, left.len()) };
        slice_search_linear(inited, $k)
    }};
}

concread-0.4.6/src/internals/bptree/mod.rs

//! Nothing to see here.

#[macro_use]
pub(crate) mod macros;
pub(crate) mod cursor;
pub mod iter;
pub mod mutiter;
pub(crate) mod node;
pub(crate) mod states;

concread-0.4.6/src/internals/bptree/mutiter.rs

//! Mutable iterators over bptree.

use std::borrow::Borrow;
// use std::collections::VecDeque;
use std::fmt::Debug;
use std::marker::PhantomData;
use std::ops::RangeBounds;

use super::cursor::{CursorReadOps, CursorWrite};
use super::iter::RangeIter;

/// Mutable Iterator over references to Key Value pairs stored, bounded by a range.
pub struct RangeMutIter<'n, 'a, K, V>
where
    K: Ord + Clone + Debug,
    V: Clone,
{
    cursor: &'n mut CursorWrite<K, V>,
    inner_range_iter: RangeIter<'n, 'a, K, V>,
    phantom_k: PhantomData<&'a K>,
    phantom_v: PhantomData<&'a V>,
}

impl<'n, K, V> RangeMutIter<'n, '_, K, V>
where
    K: Clone + Ord + Debug,
    V: Clone,
{
    pub(crate) fn new<R, T>(cursor: &'n mut CursorWrite<K, V>, range: R) -> Self
    where
        T: Ord + ?Sized,
        K: Borrow<T>,
        R: RangeBounds<T>,
    {
        // For now I'm doing this in the "stupidest way possible".
        //
        // The reason is that we could do this with a more advanced iterator that
        // determines clones as we go, but that's quite a bit more work. For now, if
        // we just use an existing iterator and get_mut_ref as we go, we get the
        // same effect.
        //
        // This relies on the fact that we take &mut over the cursor, so the keys
        // can't change during this; even if we iterate over the ro-snapshot, the
        // keys still sync to the rw version.
        let length = cursor.len();
        let root = cursor.get_root();
        let inner_range_iter = RangeIter::new(root, range, length);
        RangeMutIter {
            cursor,
            inner_range_iter,
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }
}

impl<'n, 'a, K: Clone + Ord + Debug, V: Clone> Iterator for RangeMutIter<'n, 'a, K, V>
where
    'n: 'a,
{
    type Item = (&'a K, &'a mut V);

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        if let Some((k, _)) = self.inner_range_iter.next() {
            self.cursor.get_mut_ref(k).map(|v| {
                // Rust's lifetime constraints aren't working here, and this is
                // yielding 'n when we need '_ which is shorter. So we force strip
                // and apply the lifetime to constrain it to this iterator.
                let v = v as *mut V;
                let v = unsafe { &mut *v as &mut V };
                (k, v)
            })
        } else {
            None
        }
    }

    /// Provide a hint as to the number of items this iterator will yield.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, Some(self.cursor.len()))
    }
}

#[cfg(test)]
mod tests {
    use super::super::cursor::SuperBlock;
    use super::super::node::{Leaf, Node, L_CAPACITY};
    use super::RangeMutIter;
    use std::ops::Bound;
    use std::ops::Bound::*;

    use crate::internals::lincowcell::LinCowCellCapable;

    fn create_leaf_node_full(vbase: usize) -> *mut Node<usize, usize> {
        assert!(vbase % 10 == 0);
        let node = Node::new_leaf(0);
        {
            let nmut = leaf_ref!(node, usize, usize);
            for idx in 0..L_CAPACITY {
                let v = vbase + idx;
                nmut.insert_or_update(v, v);
            }
        }
        node as *mut _
    }

    #[test]
    fn test_bptree2_iter_mutrangeiter_1() {
        let node = create_leaf_node_full(10);
        let sb = SuperBlock::new_test(1, node as *mut Node<usize, usize>);
        let mut wcurs = sb.create_writer();

        let bounds: (Bound<usize>, Bound<usize>) = (Unbounded, Unbounded);
        let range_mut_iter = RangeMutIter::new(&mut wcurs, bounds);
        for (k, v_mut) in range_mut_iter {
            assert_eq!(*k, *v_mut);
        }

        let bounds: (Bound<usize>, Bound<usize>) = (Unbounded, Unbounded);
        let range_mut_iter = RangeMutIter::new(&mut wcurs, bounds);
        for (_k, v_mut) in range_mut_iter {
            *v_mut += 1;
        }

        let bounds: (Bound<usize>, Bound<usize>) = (Unbounded, Unbounded);
        let range_mut_iter = RangeMutIter::new(&mut wcurs, bounds);
        for (k, v_mut) in range_mut_iter {
            assert_eq!(*k + 1, *v_mut);
        }
    }
}

concread-0.4.6/src/internals/bptree/node.rs

use super::states::*;
use crate::utils::*;
// use libc::{c_void, mprotect, PROT_READ, PROT_WRITE};
use crossbeam_utils::CachePadded;
use std::borrow::Borrow;
use std::fmt::{self, Debug, Error};
use std::marker::PhantomData;
use std::mem::MaybeUninit;
use std::ptr;
use std::slice;

#[cfg(test)]
use std::collections::BTreeSet;
#[cfg(all(test, not(miri)))]
use std::sync::atomic::{AtomicUsize, Ordering};
#[cfg(all(test, not(miri)))]
use std::sync::Mutex;

pub(crate) const TXID_MASK: u64 = 0x0fff_ffff_ffff_fff0;
const FLAG_MASK: u64 = 0xf000_0000_0000_0000;
const COUNT_MASK: u64 = 0x0000_0000_0000_000f;
pub(crate) const
TXID_SHF: usize = 4; const FLAG_BRANCH: u64 = 0x1000_0000_0000_0000; const FLAG_LEAF: u64 = 0x2000_0000_0000_0000; const FLAG_INVALID: u64 = 0x4000_0000_0000_0000; // const FLAG_HASH: u64 = 0x4000_0000_0000_0000; // const FLAG_BUCKET: u64 = 0x8000_0000_0000_0000; const FLAG_DROPPED: u64 = 0xaaaa_bbbb_cccc_dddd; #[cfg(feature = "skinny")] pub(crate) const L_CAPACITY: usize = 3; #[cfg(feature = "skinny")] pub(crate) const L_CAPACITY_N1: usize = L_CAPACITY - 1; #[cfg(feature = "skinny")] pub(crate) const BV_CAPACITY: usize = L_CAPACITY + 1; #[cfg(not(feature = "skinny"))] pub(crate) const L_CAPACITY: usize = 7; #[cfg(not(feature = "skinny"))] pub(crate) const L_CAPACITY_N1: usize = L_CAPACITY - 1; #[cfg(not(feature = "skinny"))] pub(crate) const BV_CAPACITY: usize = L_CAPACITY + 1; #[cfg(all(test, not(miri)))] thread_local!(static NODE_COUNTER: AtomicUsize = const { AtomicUsize::new(1) }); #[cfg(all(test, not(miri)))] thread_local!(static ALLOC_LIST: Mutex> = const { Mutex::new(BTreeSet::new()) }); #[cfg(all(test, not(miri)))] fn alloc_nid() -> usize { let nid: usize = NODE_COUNTER.with(|nc| nc.fetch_add(1, Ordering::AcqRel)); #[cfg(all(test, not(miri)))] { ALLOC_LIST.with(|llist| llist.lock().unwrap().insert(nid)); } // eprintln!("Allocate -> {:?}", nid); nid } #[cfg(all(test, not(miri)))] fn release_nid(nid: usize) { // println!("Release -> {:?}", nid); // debug_assert!(nid != 3); let r = ALLOC_LIST.with(|llist| llist.lock().unwrap().remove(&nid)); assert!(r); } #[cfg(test)] pub(crate) fn assert_released() { #[cfg(not(miri))] { let is_empt = ALLOC_LIST.with(|llist| { let x = llist.lock().unwrap(); eprintln!("Assert Released - Remaining -> {:?}", x); x.is_empty() }); assert!(is_empt); } } #[repr(C)] pub(crate) struct Meta(u64); #[repr(C)] pub(crate) struct Branch where K: Ord + Clone + Debug, V: Clone, { pub(crate) meta: Meta, key: [MaybeUninit; L_CAPACITY], nodes: [*mut Node; BV_CAPACITY], #[cfg(all(test, not(miri)))] pub(crate) nid: usize, } #[repr(C)] pub(crate) 
struct Leaf where K: Ord + Clone + Debug, V: Clone, { pub(crate) meta: Meta, key: [MaybeUninit; L_CAPACITY], values: [MaybeUninit; L_CAPACITY], #[cfg(all(test, not(miri)))] pub(crate) nid: usize, } #[repr(C)] pub(crate) struct Node { pub(crate) meta: Meta, k: PhantomData, v: PhantomData, } unsafe impl Send for Node { } unsafe impl Sync for Node { } /* pub(crate) union NodeX where K: Ord + Clone + Debug, V: Clone, { meta: Meta, leaf: Leaf, branch: Branch, } */ impl Node { pub(crate) fn new_leaf(txid: u64) -> *mut Leaf { // println!("Req new leaf"); debug_assert!(txid < (TXID_MASK >> TXID_SHF)); let x: Box>> = Box::new(CachePadded::new(Leaf { meta: Meta((txid << TXID_SHF) | FLAG_LEAF), key: unsafe { MaybeUninit::uninit().assume_init() }, values: unsafe { MaybeUninit::uninit().assume_init() }, #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); Box::into_raw(x) as *mut Leaf } fn new_leaf_ins(flags: u64, k: K, v: V) -> *mut Leaf { // println!("Req new leaf ins"); // debug_assert!(false); debug_assert!((flags & FLAG_MASK) == FLAG_LEAF); // Let the flag, txid and the count of value 1 through. 
let txid = flags & (TXID_MASK | FLAG_MASK | 1); let x: Box>> = Box::new(CachePadded::new(Leaf { meta: Meta(txid), #[cfg(feature = "skinny")] key: [ MaybeUninit::new(k), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(not(feature = "skinny"))] key: [ MaybeUninit::new(k), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(feature = "skinny")] values: [ MaybeUninit::new(v), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(not(feature = "skinny"))] values: [ MaybeUninit::new(v), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); Box::into_raw(x) as *mut Leaf } pub(crate) fn new_branch( txid: u64, l: *mut Node, r: *mut Node, ) -> *mut Branch { // println!("Req new branch"); debug_assert!(!l.is_null()); debug_assert!(!r.is_null()); debug_assert!(unsafe { (*l).verify() }); debug_assert!(unsafe { (*r).verify() }); debug_assert!(txid < (TXID_MASK >> TXID_SHF)); let x: Box>> = Box::new(CachePadded::new(Branch { // This sets the default (key) count to 1, since we take an l/r meta: Meta((txid << TXID_SHF) | FLAG_BRANCH | 1), #[cfg(feature = "skinny")] key: [ MaybeUninit::new(unsafe { (*r).min().clone() }), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(not(feature = "skinny"))] key: [ MaybeUninit::new(unsafe { (*r).min().clone() }), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(feature = "skinny")] nodes: [l, r, ptr::null_mut(), ptr::null_mut()], #[cfg(not(feature = "skinny"))] nodes: [ l, r, ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ], #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); debug_assert!(x.verify()); Box::into_raw(x) as *mut Branch } #[inline(always)] 
pub(crate) fn make_ro(&self) { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.make_ro() } FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.make_ro() } _ => unreachable!(), } } #[inline(always)] #[cfg(test)] pub(crate) fn get_txid(&self) -> u64 { self.meta.get_txid() } #[inline(always)] pub(crate) fn is_leaf(&self) -> bool { self.meta.is_leaf() } #[allow(unused)] #[inline(always)] pub(crate) fn is_branch(&self) -> bool { self.meta.is_branch() } #[cfg(test)] pub(crate) fn tree_density(&self) -> (usize, usize) { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; (lref.count(), L_CAPACITY) } FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; let mut lcount = 0; // leaf populated let mut mcount = 0; // leaf max possible for idx in 0..(bref.count() + 1) { let n = bref.nodes[idx] as *mut Node; let (l, m) = unsafe { (*n).tree_density() }; lcount += l; mcount += m; } (lcount, mcount) } _ => unreachable!(), } } /* pub(crate) fn leaf_count(&self) -> usize { match self.meta.0 & FLAG_MASK { FLAG_LEAF => 1, FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; let mut lcount = 0; // leaf count for idx in 0..(bref.count() + 1) { let n = bref.nodes[idx] as *mut Node; lcount += unsafe { (*n).leaf_count() }; } lcount } _ => unreachable!(), } } */ #[cfg(test)] #[inline(always)] pub(crate) fn get_ref(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord, { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.get_ref(k) } FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.get_ref(k) } _ => { // println!("FLAGS: {:x}", self.meta.0); unreachable!() } } } #[inline(always)] pub(crate) fn min(&self) -> &K { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { let lref = unsafe { &*(self as *const _ as *const 
Leaf) }; lref.min() } FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.min() } _ => unreachable!(), } } #[inline(always)] pub(crate) fn max(&self) -> &K { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.max() } FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.max() } _ => unreachable!(), } } #[inline(always)] pub(crate) fn verify(&self) -> bool { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.verify() } FLAG_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.verify() } _ => unreachable!(), } } #[cfg(test)] fn no_cycles_inner(&self, track: &mut BTreeSet<*const Self>) -> bool { match self.meta.0 & FLAG_MASK { FLAG_LEAF => { // check if we are in the set? track.insert(self as *const Self) } FLAG_BRANCH => { if track.insert(self as *const Self) { // check let bref = unsafe { &*(self as *const _ as *const Branch) }; for i in 0..(bref.count() + 1) { let n = bref.nodes[i]; let r = unsafe { (*n).no_cycles_inner(track) }; if !r { // panic!(); return false; } } true } else { // panic!(); false } } _ => { // println!("FLAGS: {:x}", self.meta.0); unreachable!() } } } #[cfg(test)] pub(crate) fn no_cycles(&self) -> bool { let mut track = BTreeSet::new(); self.no_cycles_inner(&mut track) } pub(crate) fn sblock_collect(&mut self, alloc: &mut Vec<*mut Node>) { // Reset our txid. 
// self.meta.0 &= FLAG_MASK | COUNT_MASK; // self.meta.0 |= txid << TXID_SHF; if (self.meta.0 & FLAG_MASK) == FLAG_BRANCH { let bref = unsafe { &*(self as *const _ as *const Branch) }; for idx in 0..(bref.count() + 1) { alloc.push(bref.nodes[idx]); let n = bref.nodes[idx]; unsafe { (*n).sblock_collect(alloc) }; } } } pub(crate) fn free(node: *mut Node) { let self_meta = self_meta!(node); match self_meta.0 & FLAG_MASK { FLAG_LEAF => Leaf::free(node as *mut Leaf), FLAG_BRANCH => Branch::free(node as *mut Branch), _ => unreachable!(), } } } impl Meta { #[inline(always)] fn set_count(&mut self, c: usize) { debug_assert!(c < 16); // Zero the bits in the flag from the count. self.0 &= FLAG_MASK | TXID_MASK; // Assign them. self.0 |= c as u8 as u64; } #[inline(always)] pub(crate) fn count(&self) -> usize { (self.0 & COUNT_MASK) as usize } #[inline(always)] fn add_count(&mut self, x: usize) { self.set_count(self.count() + x); } #[inline(always)] fn inc_count(&mut self) { debug_assert!(self.count() < 15); // Since count is the lowest bits, we can just inc // dec this as normal. 
self.0 += 1; } #[inline(always)] fn dec_count(&mut self) { debug_assert!(self.count() > 0); self.0 -= 1; } #[inline(always)] pub(crate) fn get_txid(&self) -> u64 { (self.0 & TXID_MASK) >> TXID_SHF } #[inline(always)] pub(crate) fn is_leaf(&self) -> bool { (self.0 & FLAG_MASK) == FLAG_LEAF } #[inline(always)] pub(crate) fn is_branch(&self) -> bool { (self.0 & FLAG_MASK) == FLAG_BRANCH } } impl Leaf { #[inline(always)] #[cfg(test)] fn set_count(&mut self, c: usize) { debug_assert_leaf!(self); self.meta.set_count(c) } #[inline(always)] pub(crate) fn count(&self) -> usize { debug_assert_leaf!(self); self.meta.count() } #[inline(always)] fn inc_count(&mut self) { debug_assert_leaf!(self); self.meta.inc_count() } #[inline(always)] fn dec_count(&mut self) { debug_assert_leaf!(self); self.meta.dec_count() } #[inline(always)] pub(crate) fn get_txid(&self) -> u64 { debug_assert_leaf!(self); self.meta.get_txid() } pub(crate) fn locate(&self, k: &Q) -> Result where K: Borrow, Q: Ord + ?Sized, { debug_assert_leaf!(self); key_search!(self, k) } pub(crate) fn get_ref(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord + ?Sized, { debug_assert_leaf!(self); key_search!(self, k) .ok() .map(|idx| unsafe { &*self.values[idx].as_ptr() }) } pub(crate) fn get_mut_ref(&mut self, k: &Q) -> Option<&mut V> where K: Borrow, Q: Ord + ?Sized, { debug_assert_leaf!(self); key_search!(self, k) .ok() .map(|idx| unsafe { &mut *self.values[idx].as_mut_ptr() }) } #[inline(always)] pub(crate) fn get_kv_idx_checked(&self, idx: usize) -> Option<(&K, &V)> { debug_assert_leaf!(self); if idx < self.count() { Some((unsafe { &*self.key[idx].as_ptr() }, unsafe { &*self.values[idx].as_ptr() })) } else { None } } pub(crate) fn min(&self) -> &K { debug_assert!(self.count() > 0); unsafe { &*self.key[0].as_ptr() } } pub(crate) fn max(&self) -> &K { debug_assert!(self.count() > 0); unsafe { &*self.key[self.count() - 1].as_ptr() } } pub(crate) fn min_value(&self) -> Option<(&K, &V)> { if self.count() > 0 { 
            self.get_kv_idx_checked(0)
        } else {
            None
        }
    }

    pub(crate) fn max_value(&self) -> Option<(&K, &V)> {
        if self.count() > 0 {
            self.get_kv_idx_checked(self.count() - 1)
        } else {
            None
        }
    }

    pub(crate) fn req_clone(&self, txid: u64) -> Option<*mut Node<K, V>> {
        debug_assert_leaf!(self);
        debug_assert!(txid < (TXID_MASK >> TXID_SHF));
        if self.get_txid() == txid {
            // Same txn, no action needed.
            None
        } else {
            debug_assert!(txid > self.get_txid());
            // eprintln!("Req clone leaf");
            // debug_assert!(false);
            // Diff txn, must clone.
            // https://github.com/kanidm/concread/issues/55
            // We flag the node as unable to drop its internals.
            let new_txid =
                (self.meta.0 & (FLAG_MASK | COUNT_MASK)) | (txid << TXID_SHF) | FLAG_INVALID;

            let mut x: Box<CachePadded<Leaf<K, V>>> = Box::new(CachePadded::new(Leaf {
                // Need to preserve count.
                meta: Meta(new_txid),
                key: unsafe { MaybeUninit::uninit().assume_init() },
                values: unsafe { MaybeUninit::uninit().assume_init() },
                #[cfg(all(test, not(miri)))]
                nid: alloc_nid(),
            }));
            debug_assert!((x.meta.0 & FLAG_INVALID) != 0);

            // Copy in the values to the correct locations.
            for idx in 0..self.count() {
                unsafe {
                    let lkey = (*self.key[idx].as_ptr()).clone();
                    x.key[idx].as_mut_ptr().write(lkey);
                    let lvalue = (*self.values[idx].as_ptr()).clone();
                    x.values[idx].as_mut_ptr().write(lvalue);
                }
            }

            // Finally undo the invalid flag to allow drop to proceed.
            x.meta.0 &= !FLAG_INVALID;
            debug_assert!((x.meta.0 & FLAG_INVALID) == 0);

            Some(Box::into_raw(x) as *mut Node<K, V>)
        }
    }

    pub(crate) fn insert_or_update(&mut self, k: K, v: V) -> LeafInsertState<K, V> {
        debug_assert_leaf!(self);
        // Find the location we need to update.
        let r = key_search!(self, &k);
        match r {
            Ok(idx) => {
                // It exists at idx, replace.
                let prev = unsafe { self.values[idx].as_mut_ptr().replace(v) };
                // Prev now contains the original value, return it!
                LeafInsertState::Ok(Some(prev))
            }
            Err(idx) => {
                if self.count() >= L_CAPACITY {
                    // Overflow to a new node.
                    if idx >= self.count() {
                        // Greater than all else, split right.
                        let rnode = Node::new_leaf_ins(self.meta.0, k, v);
                        LeafInsertState::Split(rnode)
                    } else if idx == 0 {
                        // Lower than all else, split left.
                        // let lnode = ...;
                        let lnode = Node::new_leaf_ins(self.meta.0, k, v);
                        LeafInsertState::RevSplit(lnode)
                    } else {
                        // Within our range; pop the max, insert, and split
                        // right.
                        let pk = unsafe {
                            slice_remove(&mut self.key, L_CAPACITY - 1).assume_init()
                        };
                        let pv = unsafe {
                            slice_remove(&mut self.values, L_CAPACITY - 1).assume_init()
                        };
                        unsafe {
                            slice_insert(&mut self.key, MaybeUninit::new(k), idx);
                            slice_insert(&mut self.values, MaybeUninit::new(v), idx);
                        }
                        let rnode = Node::new_leaf_ins(self.meta.0, pk, pv);
                        LeafInsertState::Split(rnode)
                    }
                } else {
                    // We have space.
                    unsafe {
                        slice_insert(&mut self.key, MaybeUninit::new(k), idx);
                        slice_insert(&mut self.values, MaybeUninit::new(v), idx);
                    }
                    self.inc_count();
                    LeafInsertState::Ok(None)
                }
            }
        }
    }

    pub(crate) fn remove<Q>(&mut self, k: &Q) -> LeafRemoveState<V>
    where
        K: Borrow<Q>,
        Q: Ord + ?Sized,
    {
        debug_assert_leaf!(self);
        if self.count() == 0 {
            return LeafRemoveState::Shrink(None);
        }
        // We must have a value - where are you ....
match key_search!(self, k).ok() { // Count still greater than 0, so Ok and None, None => LeafRemoveState::Ok(None), Some(idx) => { // Get the kv out let _pk = unsafe { slice_remove(&mut self.key, idx).assume_init() }; let pv = unsafe { slice_remove(&mut self.values, idx).assume_init() }; self.dec_count(); if self.count() == 0 { LeafRemoveState::Shrink(Some(pv)) } else { LeafRemoveState::Ok(Some(pv)) } } } } pub(crate) fn take_from_l_to_r(&mut self, right: &mut Self) { debug_assert!(right.count() == 0); let count = self.count() / 2; let start_idx = self.count() - count; //move key and values unsafe { slice_move(&mut right.key, 0, &mut self.key, start_idx, count); slice_move(&mut right.values, 0, &mut self.values, start_idx, count); } // update the counts self.meta.set_count(start_idx); right.meta.set_count(count); } pub(crate) fn take_from_r_to_l(&mut self, right: &mut Self) { debug_assert!(self.count() == 0); let count = right.count() / 2; let start_idx = right.count() - count; // Move values from right to left. unsafe { slice_move(&mut self.key, 0, &mut right.key, 0, count); slice_move(&mut self.values, 0, &mut right.values, 0, count); } // Shift the values in right down. unsafe { ptr::copy( right.key.as_ptr().add(count), right.key.as_mut_ptr(), start_idx, ); ptr::copy( right.values.as_ptr().add(count), right.values.as_mut_ptr(), start_idx, ); } // Fix the counts. 
self.meta.set_count(count); right.meta.set_count(start_idx); } /* pub(crate) fn remove_lt(&mut self, k: &Q) -> LeafPruneState where K: Borrow, Q: Ord, { unimplemented!(); } */ #[inline(always)] pub(crate) fn make_ro(&self) { debug_assert_leaf!(self); /* let r = unsafe { mprotect( self as *const Leaf as *mut c_void, size_of::>(), PROT_READ ) }; assert!(r == 0); */ } #[inline(always)] pub(crate) fn merge(&mut self, right: &mut Self) { debug_assert_leaf!(self); debug_assert_leaf!(right); let sc = self.count(); let rc = right.count(); unsafe { slice_merge(&mut self.key, sc, &mut right.key, rc); slice_merge(&mut self.values, sc, &mut right.values, rc); } self.meta.add_count(right.count()); right.meta.set_count(0); } pub(crate) fn verify(&self) -> bool { debug_assert_leaf!(self); // println!("verify leaf -> {:?}", self); // Check key sorting if self.meta.count() == 0 { return true; } let mut lk: &K = unsafe { &*self.key[0].as_ptr() }; for work_idx in 1..self.meta.count() { let rk: &K = unsafe { &*self.key[work_idx].as_ptr() }; if lk >= rk { // println!("{:?}", self); if cfg!(test) { return false; } else { debug_assert!(false); } } lk = rk; } true } fn free(node: *mut Self) { unsafe { let _x: Box>> = Box::from_raw(node as *mut CachePadded>); } } } impl Debug for Leaf { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), Error> { debug_assert_leaf!(self); write!(f, "Leaf -> {}", self.count())?; #[cfg(all(test, not(miri)))] write!(f, " nid: {}", self.nid)?; write!(f, " \\-> [ ")?; for idx in 0..self.count() { write!(f, "{:?}, ", unsafe { &*self.key[idx].as_ptr() })?; } write!(f, " ]") } } impl Drop for Leaf { fn drop(&mut self) { debug_assert_leaf!(self); #[cfg(all(test, not(miri)))] release_nid(self.nid); // Due to the use of maybe uninit we have to drop any contained values. // https://github.com/kanidm/concread/issues/55 // if we are invalid, do NOT drop our internals as they MAY be inconsistent. // this WILL leak memory, but it's better than crashing. 
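The `FLAG_INVALID` guard referenced in the Drop comment above can be demonstrated standalone. This is a hedged sketch, not the crate's code: `MiniLeaf` and its fields are invented for illustration, with `Rc` standing in for the stored values so we can observe drops. The idea is the same: while a clone is mid-copy, the slots are uninitialised, so `Drop` must skip them, preferring a leak over dropping garbage.

```rust
use std::mem::MaybeUninit;
use std::rc::Rc;

// Illustrative flag value, not the crate's actual constant.
const FLAG_INVALID: u64 = 0x8000;

// Stand-in for Leaf: slots may be uninitialised while FLAG_INVALID is set.
struct MiniLeaf {
    meta: u64,
    count: usize,
    slots: [MaybeUninit<Rc<()>>; 4],
}

impl MiniLeaf {
    fn uninit_slots() -> [MaybeUninit<Rc<()>>; 4] {
        [
            MaybeUninit::uninit(),
            MaybeUninit::uninit(),
            MaybeUninit::uninit(),
            MaybeUninit::uninit(),
        ]
    }
}

impl Drop for MiniLeaf {
    fn drop(&mut self) {
        if self.meta & FLAG_INVALID == 0 {
            for idx in 0..self.count {
                // Safe: these slots were initialised before count was set.
                unsafe { self.slots[idx].assume_init_drop() };
            }
        }
        // else: the slots may be uninitialised, so we leak them
        // rather than risk dropping inconsistent data.
    }
}
```

Dropping a valid `MiniLeaf` releases its contents; dropping one with `FLAG_INVALID` set leaks them but never touches uninitialised memory.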
if self.meta.0 & FLAG_INVALID == 0 { unsafe { for idx in 0..self.count() { ptr::drop_in_place(self.key[idx].as_mut_ptr()); ptr::drop_in_place(self.values[idx].as_mut_ptr()); } } } // Done self.meta.0 = FLAG_DROPPED; debug_assert!(self.meta.0 & FLAG_MASK != FLAG_LEAF); // #[cfg(test)] // println!("set leaf {:?} to {:x}", self.nid, self.meta.0); } } impl Branch { #[allow(unused)] #[inline(always)] fn set_count(&mut self, c: usize) { debug_assert_branch!(self); self.meta.set_count(c) } #[inline(always)] pub(crate) fn count(&self) -> usize { debug_assert_branch!(self); self.meta.count() } #[inline(always)] fn inc_count(&mut self) { debug_assert_branch!(self); self.meta.inc_count() } #[inline(always)] fn dec_count(&mut self) { debug_assert_branch!(self); self.meta.dec_count() } #[inline(always)] pub(crate) fn get_txid(&self) -> u64 { debug_assert_branch!(self); self.meta.get_txid() } // Can't inline as this is recursive! pub(crate) fn min(&self) -> &K { debug_assert_branch!(self); unsafe { (*self.nodes[0]).min() } } // Can't inline as this is recursive! pub(crate) fn max(&self) -> &K { debug_assert_branch!(self); // Remember, self.count() is + 1 offset, so this gets // the max node unsafe { (*self.nodes[self.count()]).max() } } pub(crate) fn min_node(&self) -> *mut Node { self.nodes[0] } pub(crate) fn max_node(&self) -> *mut Node { self.nodes[self.count()] } pub(crate) fn req_clone(&self, txid: u64) -> Option<*mut Node> { debug_assert_branch!(self); if self.get_txid() == txid { // Same txn, no action needed. None } else { // println!("Req clone branch"); // Diff txn, must clone. // # https://github.com/kanidm/concread/issues/55 // We flag the node as unable to drop it's internals. let new_txid = (self.meta.0 & (FLAG_MASK | COUNT_MASK)) | (txid << TXID_SHF) | FLAG_INVALID; let mut x: Box>> = Box::new(CachePadded::new(Branch { // Need to preserve count. meta: Meta(new_txid), key: unsafe { MaybeUninit::uninit().assume_init() }, // We can simply clone the pointers. 
nodes: self.nodes, #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); debug_assert!((x.meta.0 & FLAG_INVALID) != 0); // Copy in the keys to the correct location. for idx in 0..self.count() { unsafe { let lkey = (*self.key[idx].as_ptr()).clone(); x.key[idx].as_mut_ptr().write(lkey); } } // Finally undo the invalid flag to allow drop to proceed. x.meta.0 &= !FLAG_INVALID; debug_assert!((x.meta.0 & FLAG_INVALID) == 0); Some(Box::into_raw(x) as *mut Node) } } #[inline(always)] pub(crate) fn locate_node(&self, k: &Q) -> usize where K: Borrow, Q: Ord + ?Sized, { debug_assert_branch!(self); match key_search!(self, k) { Err(idx) => idx, Ok(idx) => idx + 1, } } #[inline(always)] pub(crate) fn get_idx_unchecked(&self, idx: usize) -> *mut Node { debug_assert_branch!(self); debug_assert!(idx <= self.count()); debug_assert!(!self.nodes[idx].is_null()); self.nodes[idx] } #[inline(always)] pub(crate) fn get_idx_checked(&self, idx: usize) -> Option<*mut Node> { debug_assert_branch!(self); // Remember, that nodes can have +1 to count which is why <= here, not <. if idx <= self.count() { debug_assert!(!self.nodes[idx].is_null()); Some(self.nodes[idx]) } else { None } } #[cfg(test)] pub(crate) fn get_ref(&self, k: &Q) -> Option<&V> where K: Borrow, Q: Ord, { debug_assert_branch!(self); // If the value is Ok(idx), then that means // we were located to the right node. This is because we // exactly hit and located on the key. // // If the value is Err(idx), then we have the exact index already. // as branches is of-by-one. let idx = self.locate_node(k); unsafe { (*self.nodes[idx]).get_ref(k) } } pub(crate) fn add_node(&mut self, node: *mut Node) -> BranchInsertState { debug_assert_branch!(self); // do we have space? if self.count() == L_CAPACITY { // if no space -> // split and send two nodes back for new branch // There are three possible states that this causes. // 1 * The inserted node is the greater than all current values, causing l(max, node) // to be returned. 
// 2 * The inserted node is between max - 1 and max, causing l(node, max) to be returned. // 3 * The inserted node is a low/middle value, causing max and max -1 to be returned. // let kr = unsafe { (*node).min() }; let r = key_search!(self, kr); let ins_idx = r.unwrap_err(); // Everything will pop max. let max = unsafe { *(self.nodes.get_unchecked(BV_CAPACITY - 1)) }; let res = match ins_idx { // Case 1 L_CAPACITY => { // println!("case 1"); // Greater than all current values, so we'll just return max and node. let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 1)).assume_init() }; // Now setup the ret val NOTICE compared to case 2 that we swap node and max? BranchInsertState::Split(max, node) } // Case 2 L_CAPACITY_N1 => { // println!("case 2"); // Greater than all but max, so we return max and node in the correct order. // Drop the key between them. let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 1)).assume_init() }; // Now setup the ret val NOTICE compared to case 1 that we swap node and max? BranchInsertState::Split(node, max) } // Case 3 ins_idx => { // Get the max - 1 and max nodes out. let maxn1 = unsafe { *(self.nodes.get_unchecked(BV_CAPACITY - 2)) }; // Drop the key between them. let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 1)).assume_init() }; // Drop the key before us that we are about to replace. let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 2)).assume_init() }; // Add node and it's key to the correct location. let k: K = kr.clone(); let leaf_ins_idx = ins_idx + 1; unsafe { slice_insert(&mut self.key, MaybeUninit::new(k), ins_idx); slice_insert(&mut self.nodes, node, leaf_ins_idx); } BranchInsertState::Split(maxn1, max) } }; // Dec count as we always reduce branch by one as we split return // two. self.dec_count(); res } else { // if space -> // Get the nodes min-key - we clone it because we'll certainly be inserting it! 
let k: K = unsafe { (*node).min().clone() }; // bst and find when min-key < key[idx] let r = key_search!(self, &k); // if r is ever found, I think this is a bug, because we should never be able to // add a node with an existing min. // // [ 5 ] // / \ // [0,] [5,] // // So if we added here to [0, ], and it had to overflow to split, then everything // must be < 5. Why? Because to get to [0,] as your insert target, you must be < 5. // if we added to [5,] then a split must be greater than, or the insert would replace 5. // // if we consider // // [ 5 ] // / \ // [0,] [7,] // // Now we insert 5, and 7, splits. 5 would remain in the tree and we'd split 7 to the right // // As a result, any "Ok(idx)" must represent a corruption of the tree. // debug_assert!(r.is_err()); let ins_idx = r.unwrap_err(); let leaf_ins_idx = ins_idx + 1; // So why do we only need to insert right? Because the left-most // leaf when it grows, it splits to the right. That importantly // means that we only need to insert to replace the min and it's // right leaf, or anything higher. As a result, we are always // targetting ins_idx and leaf_ins_idx = ins_idx + 1. // // We have a situation like: // // [1, 3, 9, 18] // // and ins_idx is 2. IE: // // [1, 3, 9, 18] // ^-- k=6 // // So this we need to shift those r-> and insert. // // [1, 3, x, 9, 18] // ^-- k=6 // // [1, 3, 6, 9, 18] // // Now we need to consider the leaves too: // // [1, 3, 9, 18] // | | | | | // v v v v v // 0 1 3 9 18 // // So that means we need to move leaf_ins_idx = (ins_idx + 1) // right also // // [1, 3, x, 9, 18] // | | | | | | // v v v v v v // 0 1 3 x 9 18 // ^-- leaf for k=6 will go here. // // Now to talk about the right expand issue - lets say 0 conducted // a split, it returns the new right node - which would push // 3 to the right to insert a new right hand side as required. So we // really never need to consider the left most leaf to have to be // replaced in any conditions. // // Magic! 
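The off-by-one relationship walked through above (`key[i]` separates `nodes[i]` and `nodes[i + 1]`, and a split always lands to the right) can be sketched with `Vec`s. This is a simplified illustration, not the crate's code: `branch_add_node` and the string child labels are invented, and the at-capacity split cases are omitted.

```rust
// keys[i] separates children[i] and children[i + 1]. A node split always
// produces its new sibling to the right, so the new child goes at the
// key's index + 1.
fn branch_add_node(
    keys: &mut Vec<u64>,
    children: &mut Vec<&'static str>,
    min_key: u64,
    child: &'static str,
) {
    debug_assert_eq!(children.len(), keys.len() + 1);
    // Find where the new node's minimum key sorts. An exact match would
    // mean a corrupt tree, as argued in the comment above.
    let ins_idx = keys.binary_search(&min_key).unwrap_err();
    keys.insert(ins_idx, min_key);
    children.insert(ins_idx + 1, child);
}
```

Running the worked example from the comments (inserting k=6 into keys `[1, 3, 9, 18]`) places the key at index 2 and the child at index 3, exactly as drawn above.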
unsafe { slice_insert(&mut self.key, MaybeUninit::new(k), ins_idx); slice_insert(&mut self.nodes, node, leaf_ins_idx); } // finally update the count self.inc_count(); // Return that we are okay to go! BranchInsertState::Ok } } pub(crate) fn add_node_left( &mut self, lnode: *mut Node, sibidx: usize, ) -> BranchInsertState { debug_assert_branch!(self); if self.count() == L_CAPACITY { if sibidx == self.count() { // If sibidx == self.count, then we must be going into max - 1. // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, k5, xx ] // [ v1, v2, v3, v4, v5, v6, xx ] // // [ k1, k2, k3, k4, k5, xx ] [ k6 ] // [ v1, v2, v3, v4, v5, v6, xx ] -> [ ln, v7 ] // // So in this case we drop k6, and return a split. let max = self.nodes[BV_CAPACITY - 1]; let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 1)).assume_init() }; self.dec_count(); BranchInsertState::Split(lnode, max) } else if sibidx == (self.count() - 1) { // If sibidx == (self.count - 1), then we must be going into max - 2 // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, dd, xx ] // [ v1, v2, v3, v4, v5, xx, xx ] // // // This means that we need to return v6,v7 in a split, and // just append node after v5. 
let maxn1 = self.nodes[BV_CAPACITY - 2]; let max = self.nodes[BV_CAPACITY - 1]; let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 1)).assume_init() }; let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 2)).assume_init() }; self.dec_count(); self.dec_count(); // [ k1, k2, k3, k4, dd, xx ] [ k6 ] // [ v1, v2, v3, v4, v5, xx, xx ] -> [ v6, v7 ] let k: K = unsafe { (*lnode).min().clone() }; unsafe { slice_insert(&mut self.key, MaybeUninit::new(k), sibidx - 1); slice_insert(&mut self.nodes, lnode, sibidx); // slice_insert(&mut self.node, MaybeUninit::new(node), sibidx); } self.inc_count(); // // [ k1, k2, k3, k4, nk, xx ] [ k6 ] // [ v1, v2, v3, v4, v5, ln, xx ] -> [ v6, v7 ] BranchInsertState::Split(maxn1, max) } else { // All other cases; // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, dd, xx ] // [ v1, v2, v3, v4, v5, xx, xx ] // // [ k1, k2, k3, nk, k4, dd ] [ k6 ] // [ v1, v2, v3, ln, v4, v5, xx ] -> [ v6, v7 ] // // This means that we need to return v6,v7 in a split,, drop k5, // then insert // Setup the nodes we intend to split away. let maxn1 = self.nodes[BV_CAPACITY - 2]; let max = self.nodes[BV_CAPACITY - 1]; let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 1)).assume_init() }; let _kdrop = unsafe { ptr::read(self.key.get_unchecked(L_CAPACITY - 2)).assume_init() }; self.dec_count(); self.dec_count(); // println!("pre-fixup -> {:?}", self); let sibnode = self.nodes[sibidx]; let nkey: K = unsafe { (*sibnode).min().clone() }; unsafe { slice_insert(&mut self.key, MaybeUninit::new(nkey), sibidx); slice_insert(&mut self.nodes, lnode, sibidx); } self.inc_count(); // println!("post fixup -> {:?}", self); BranchInsertState::Split(maxn1, max) } } else { // We have space, so just put it in! 
            // [ k1, k2, k3, k4, xx, xx ]
            // [ v1, v2, v3, v4, v5, xx, xx ]
            //                ^   ^-- sibidx
            //                 \---- where left should go
            //
            // [ k1, k2, k3, k4, xx, xx ]
            // [ v1, v2, v3, ln, v4, v5, xx ]
            //
            // [ k1, k2, k3, nk, k4, xx ]
            // [ v1, v2, v3, ln, v4, v5, xx ]
            //
            let sibnode = self.nodes[sibidx];
            let nkey: K = unsafe { (*sibnode).min().clone() };

            unsafe {
                slice_insert(&mut self.nodes, lnode, sibidx);
                slice_insert(&mut self.key, MaybeUninit::new(nkey), sibidx);
            }
            self.inc_count();
            // println!("post fixup -> {:?}", self);
            BranchInsertState::Ok
        }
    }

    fn remove_by_idx(&mut self, idx: usize) -> *mut Node<K, V> {
        debug_assert_branch!(self);
        debug_assert!(idx <= self.count());
        debug_assert!(idx > 0);
        // Remove by idx.
        let _pk = unsafe { slice_remove(&mut self.key, idx - 1).assume_init() };
        let pn = unsafe { slice_remove(&mut self.nodes, idx) };
        self.dec_count();
        pn
    }

    pub(crate) fn shrink_decision(&mut self, ridx: usize) -> BranchShrinkState<K, V> {
        // Given two nodes, we need to decide what to do with them!
        //
        // Remember, this isn't happening in a vacuum. This is really a manipulation of
        // the following structure:
        //
        //      parent (self)
        //       /     \
        //   left       right
        //
        // We also need to consider the following situation too:
        //
        //          root
        //         /    \
        //    lbranch   rbranch
        //    /    \    /     \
        //   l1    l2  r1     r2
        //
        // Imagine we have exhausted r2, so we need to merge:
        //
        //          root
        //         /    \
        //    lbranch   rbranch
        //    /    \    /     \
        //   l1    l2  r1 <<-- r2
        //
        // This leaves us with a partial state of
        //
        //          root
        //         /    \
        //    lbranch   rbranch (invalid!)
        //    /    \    /
        //   l1    l2  r1
        //
        // This means rbranch issues a cloneshrink to root. Clone shrink must contain the
        // remainder so that it can be reparented:
        //
        //          root
        //         /
        //    lbranch --
        //    /    \    \
        //   l1    l2   r1
        //
        // Now root has to shrink too.
        //
        //     root --
        //    /    \   \
        //   l1    l2  r1
        //
        // So, we have to analyse the situation.
        // * Have left or right been emptied? (how to handle when branches)
        // * Is left or right below a reasonable threshold?
        // * Does the opposite have capacity to remain valid?
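For the leaf case, these questions reduce to a small decision table. Here is a hedged sketch with element counts standing in for real nodes; the enum is simplified and invented for illustration (the real `BranchShrinkState` carries node pointers, and the real code performs the merge or borrow in place):

```rust
const L_CAPACITY: usize = 8;

#[derive(Debug, PartialEq)]
enum ShrinkDecision {
    Merge,           // combined contents fit in a single node
    BorrowFromRight, // right has spare capacity to donate
    BorrowFromLeft,  // left has spare capacity to donate
    Balanced,        // nothing to do
}

// Mirror of the leaf arm's ordering: try to merge first, then borrow
// from whichever sibling is above half capacity.
fn shrink_decision(left: usize, right: usize) -> ShrinkDecision {
    if left + right <= L_CAPACITY {
        ShrinkDecision::Merge
    } else if right > (L_CAPACITY / 2) {
        ShrinkDecision::BorrowFromRight
    } else if left > (L_CAPACITY / 2) {
        ShrinkDecision::BorrowFromLeft
    } else {
        ShrinkDecision::Balanced
    }
}
```

Merging removes the right node from the parent (which may then need to shrink itself, as the diagrams above show); borrowing only rebalances and rekeys.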
debug_assert_branch!(self); debug_assert!(ridx > 0 && ridx <= self.count()); let left = self.nodes[ridx - 1]; let right = self.nodes[ridx]; debug_assert!(!left.is_null()); debug_assert!(!right.is_null()); match unsafe { (*left).meta.0 & FLAG_MASK } { FLAG_LEAF => { let lmut = leaf_ref!(left, K, V); let rmut = leaf_ref!(right, K, V); if lmut.count() + rmut.count() <= L_CAPACITY { lmut.merge(rmut); // remove the right node from parent let dnode = self.remove_by_idx(ridx); debug_assert!(dnode == right); if self.count() == 0 { // We now need to be merged across as we only contain a single // value now. BranchShrinkState::Shrink(dnode) } else { // We are complete! // #[cfg(test)] // println!("🔥 {:?}", rmut.nid); BranchShrinkState::Merge(dnode) } } else if rmut.count() > (L_CAPACITY / 2) { lmut.take_from_r_to_l(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else if lmut.count() > (L_CAPACITY / 2) { lmut.take_from_l_to_r(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else { // Do nothing BranchShrinkState::Balanced } } FLAG_BRANCH => { // right or left is now in a "corrupt" state with a single value that we need to relocate // to left - or we need to borrow from left and fix it! let lmut = branch_ref!(left, K, V); let rmut = branch_ref!(right, K, V); debug_assert!(rmut.count() == 0 || lmut.count() == 0); debug_assert!(rmut.count() <= L_CAPACITY || lmut.count() <= L_CAPACITY); // println!("{:?} {:?}", lmut.count(), rmut.count()); if lmut.count() == L_CAPACITY { lmut.take_from_l_to_r(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else if rmut.count() == L_CAPACITY { lmut.take_from_r_to_l(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else { // merge the right to tail of left. 
// println!("BL {:?}", lmut); // println!("BR {:?}", rmut); lmut.merge(rmut); // println!("AL {:?}", lmut); // println!("AR {:?}", rmut); // Reduce our count let dnode = self.remove_by_idx(ridx); debug_assert!(dnode == right); if self.count() == 0 { // We now need to be merged across as we also only contain a single // value now. BranchShrinkState::Shrink(dnode) } else { // We are complete! // #[cfg(test)] // println!("🚨 {:?}", rmut.nid); BranchShrinkState::Merge(dnode) } } } _ => unreachable!(), } } #[inline(always)] pub(crate) fn extract_last_node(&self) -> *mut Node { debug_assert_branch!(self); self.nodes[0] } pub(crate) fn rekey_by_idx(&mut self, idx: usize) { debug_assert_branch!(self); debug_assert!(idx <= self.count()); debug_assert!(idx > 0); // For the node listed, rekey it. let nref = self.nodes[idx]; let nkey = unsafe { ((*nref).min()).clone() }; unsafe { self.key[idx - 1].as_mut_ptr().write(nkey); } } #[inline(always)] pub(crate) fn merge(&mut self, right: &mut Self) { debug_assert_branch!(self); debug_assert_branch!(right); let sc = self.count(); let rc = right.count(); if rc == 0 { let node = right.nodes[0]; debug_assert!(!node.is_null()); let k: K = unsafe { (*node).min().clone() }; let ins_idx = self.count(); let leaf_ins_idx = ins_idx + 1; unsafe { slice_insert(&mut self.key, MaybeUninit::new(k), ins_idx); slice_insert(&mut self.nodes, node, leaf_ins_idx); } self.inc_count(); } else { debug_assert!(sc == 0); unsafe { // Move all the nodes from right. slice_merge(&mut self.nodes, 1, &mut right.nodes, rc + 1); // Move the related keys. slice_merge(&mut self.key, 1, &mut right.key, rc); } // Set our count correctly. self.meta.set_count(rc + 1); // Set right len to 0 right.meta.set_count(0); // rekey the lowest pointer. unsafe { let nptr = self.nodes[1]; let k: K = (*nptr).min().clone(); self.key[0].as_mut_ptr().write(k); } // done! 
} } pub(crate) fn take_from_l_to_r(&mut self, right: &mut Self) { debug_assert_branch!(self); debug_assert_branch!(right); debug_assert!(self.count() > right.count()); // Starting index of where we move from. We work normally from a branch // with only zero (but the base) branch item, but we do the math anyway // to be sure incase we change later. // // So, self.len must be larger, so let's give a few examples here. // 4 = 7 - (7 + 0) / 2 (will move 4, 5, 6) // 3 = 6 - (6 + 0) / 2 (will move 3, 4, 5) // 3 = 5 - (5 + 0) / 2 (will move 3, 4) // 2 = 4 .... (will move 2, 3) // let count = (self.count() + right.count()) / 2; let start_idx = self.count() - count; // Move the remaining element from r to the correct location. // // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] -> [ v8, ------- ] // // To: // // [ k1, k2, k3, k4, k5, k6 ] [ --, --, --, --, ... // [ v1, v2, v3, v4, v5, v6, v7 ] -> [ --, --, --, v8, --, ... // unsafe { ptr::swap( right.nodes.get_unchecked_mut(0), right.nodes.get_unchecked_mut(count), ) } // Move our values from the tail. // We would move 3 now to: // // [ k1, k2, k3, k4, k5, k6 ] [ --, --, --, --, ... // [ v1, v2, v3, v4, --, --, -- ] -> [ v5, v6, v7, v8, --, ... // unsafe { slice_move(&mut right.nodes, 0, &mut self.nodes, start_idx + 1, count); } // Remove the keys from left. // So we need to remove the corresponding keys. so that we get. // // [ k1, k2, k3, --, --, -- ] [ --, --, --, --, ... // [ v1, v2, v3, v4, --, --, -- ] -> [ v5, v6, v7, v8, --, ... // // This means it's start_idx - 1 up to BK cap for kidx in (start_idx - 1)..L_CAPACITY { let _pk = unsafe { ptr::read(self.key.get_unchecked(kidx)).assume_init() }; // They are dropped now. } // Adjust both counts - we do this before rekey to ensure that the safety // checks hold in debugging. right.meta.set_count(count); self.meta.set_count(start_idx); // Rekey right for kidx in 1..(count + 1) { right.rekey_by_idx(kidx); } // Done! 
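The halving arithmetic above (`count = (self + right) / 2`, `start_idx = self - count`) can be checked with a `Vec`-based sketch. This is an illustration only: it moves elements but omits the separator keys and the rekeying step the real branch code performs.

```rust
// Move the top half of the fuller left node into right, mirroring the
// index arithmetic of take_from_l_to_r.
fn take_from_l_to_r(left: &mut Vec<u64>, right: &mut Vec<u64>) {
    debug_assert!(left.len() > right.len());
    let count = (left.len() + right.len()) / 2;
    let start_idx = left.len() - count;
    // Elements start_idx.. migrate to the front of right.
    right.splice(0..0, left.drain(start_idx..));
}
```

With 7 elements on the left and 0 on the right this gives `count = 3`, `start_idx = 4`, moving indices 4, 5, and 6, matching the worked examples in the comment above.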
} pub(crate) fn take_from_r_to_l(&mut self, right: &mut Self) { debug_assert_branch!(self); debug_assert_branch!(right); debug_assert!(right.count() >= self.count()); let count = (self.count() + right.count()) / 2; let start_idx = right.count() - count; // We move count from right to left. unsafe { slice_move(&mut self.nodes, 1, &mut right.nodes, 0, count); } // Pop the excess keys in right // So say we had 6/7 in right, and 0/1 in left. // // We have a start_idx of 4, and count of 3. // // We moved 3 values from right, leaving 4. That means we need to remove // keys 0, 1, 2. The remaining keys are moved down. for kidx in 0..count { let _pk = unsafe { ptr::read(right.key.get_unchecked(kidx)).assume_init() }; // They are dropped now. } // move keys down in right unsafe { ptr::copy( right.key.as_ptr().add(count), right.key.as_mut_ptr(), start_idx, ); } // move nodes down in right unsafe { ptr::copy( right.nodes.as_ptr().add(count), right.nodes.as_mut_ptr(), start_idx + 1, ); } // update counts right.meta.set_count(start_idx); self.meta.set_count(count); // Rekey left for kidx in 1..(count + 1) { self.rekey_by_idx(kidx); } // Done! } #[inline(always)] pub(crate) fn replace_by_idx(&mut self, idx: usize, node: *mut Node) { debug_assert_branch!(self); debug_assert!(idx <= self.count()); debug_assert!(!self.nodes[idx].is_null()); self.nodes[idx] = node; } pub(crate) fn clone_sibling_idx( &mut self, txid: u64, idx: usize, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> usize { debug_assert_branch!(self); // if we clone, return Some new ptr. if not, None. let (ridx, idx) = if idx == 0 { // println!("clone_sibling_idx clone right"); // If we are 0 we clone our right sibling, // and return thet right idx as 1. (1, 1) } else { // println!("clone_sibling_idx clone left"); // Else we clone the left, and leave our index unchanged // as we are the right node. (idx, idx - 1) }; // Now clone the item at idx. 
debug_assert!(idx <= self.count()); let sib_ptr = self.nodes[idx]; debug_assert!(!sib_ptr.is_null()); // Do we need to clone? let res = match unsafe { (*sib_ptr).meta.0 & FLAG_MASK } { FLAG_LEAF => { let lref = unsafe { &*(sib_ptr as *const _ as *const Leaf) }; lref.req_clone(txid) } FLAG_BRANCH => { let bref = unsafe { &*(sib_ptr as *const _ as *const Branch) }; bref.req_clone(txid) } _ => unreachable!(), }; // If it did clone, it's a some, so we map that to have the from and new ptrs for // the memory management. if let Some(n_ptr) = res { // println!("ls push 101 {:?}", sib_ptr); first_seen.push(n_ptr); last_seen.push(sib_ptr); // Put the pointer in place. self.nodes[idx] = n_ptr; }; // Now return the right index ridx } /* pub(crate) fn trim_lt_key( &mut self, k: &Q, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> BranchTrimState where K: Borrow, Q: Ord, { debug_assert_branch!(self); // The possible states of a branch are // // [ 0, 4, 8, 12 ] // [n1, n2, n3, n4, n5] // let r = key_search!(self, k); let sc = self.count(); match r { Ok(idx) => { debug_assert!(idx < sc); // * A key matches exactly a value. IE k is 4. This means we can remove // n1 and n2 because we know 4 must be in n3 as the min. // NEED MM debug_assert!(false); unsafe { slice_slide_and_drop(&mut self.key, idx, sc - (idx + 1)); slice_slide(&mut self.nodes.as_mut(), idx, sc - idx); } self.meta.set_count(sc - (idx + 1)); if self.count() == 0 { let rnode = self.extract_last_node(); BranchTrimState::Promote(rnode) } else { BranchTrimState::Complete } } Err(idx) => { if idx == 0 { // * The key is less than min. IE it wants to remove the lowest value. // Check the "max" value of the subtree to know if we can proceed. let tnode: *mut Node = self.nodes[0]; let branch_k: &K = unsafe { (*tnode).max() }; if branch_k.borrow() < k { // Everything is smaller, let's remove it that subtree. 
// NEED MM debug_assert!(false); let _pk = unsafe { slice_remove(&mut self.key, 0).assume_init() }; let _pn = unsafe { slice_remove(self.nodes.as_mut(), 0) }; self.dec_count(); BranchTrimState::Complete } else { BranchTrimState::Complete } } else if idx >= self.count() { // remove everything except max. unsafe { // NEED MM debug_assert!(false); // We just drop all the keys. for kidx in 0..self.count() { ptr::drop_in_place(self.key[kidx].as_mut_ptr()); // ptr::drop_in_place(self.nodes[kidx].as_mut_ptr()); } // Move the last node to the bottom. self.nodes[0] = self.nodes[sc]; } self.meta.set_count(0); let rnode = self.extract_last_node(); // Something may still be valid, hand it on. BranchTrimState::Promote(rnode) } else { // * A key is between two values. We can remove everything less, but not // the assocated. For example, remove 6 would cause n1, n2 to be removed, but // the prune/walk will have to examine n3 to know about further changes. debug_assert!(idx > 0); let tnode: *mut Node = self.nodes[0]; let branch_k: &K = unsafe { (*tnode).max() }; if branch_k.borrow() < k { // NEED MM debug_assert!(false); // Remove including idx. unsafe { slice_slide_and_drop(&mut self.key, idx, sc - (idx + 1)); slice_slide(self.nodes.as_mut(), idx, sc - idx); } self.meta.set_count(sc - (idx + 1)); } else { // NEED MM debug_assert!(false); unsafe { slice_slide_and_drop(&mut self.key, idx - 1, sc - idx); slice_slide(self.nodes.as_mut(), idx - 1, sc - (idx - 1)); } self.meta.set_count(sc - idx); } if self.count() == 0 { // NEED MM debug_assert!(false); let rnode = self.extract_last_node(); BranchTrimState::Promote(rnode) } else { BranchTrimState::Complete } } } } } */ #[inline(always)] pub(crate) fn make_ro(&self) { debug_assert_branch!(self); /* let r = unsafe { mprotect( self as *const Branch as *mut c_void, size_of::>(), PROT_READ ) }; assert!(r == 0); */ } pub(crate) fn verify(&self) -> bool { debug_assert_branch!(self); if self.count() == 0 { // Not possible to be valid! 
debug_assert!(false); return false; } // println!("verify branch -> {:?}", self); // Check we are sorted. let mut lk: &K = unsafe { &*self.key[0].as_ptr() }; for work_idx in 1..self.count() { let rk: &K = unsafe { &*self.key[work_idx].as_ptr() }; // println!("{:?} >= {:?}", lk, rk); if lk >= rk { debug_assert!(false); return false; } lk = rk; } // Recursively call verify for work_idx in 0..self.count() { let node = unsafe { &*self.nodes[work_idx] }; if !node.verify() { for work_idx in 0..(self.count() + 1) { let nref = unsafe { &*self.nodes[work_idx] }; if !nref.verify() { // println!("Failed children"); debug_assert!(false); return false; } } } } // Check descendants are validly ordered. // V-- remember, there are count + 1 nodes. for work_idx in 0..self.count() { // get left max and right min let lnode = unsafe { &*self.nodes[work_idx] }; let rnode = unsafe { &*self.nodes[work_idx + 1] }; let pkey = unsafe { &*self.key[work_idx].as_ptr() }; let lkey = lnode.max(); let rkey = rnode.min(); if lkey >= pkey || pkey > rkey { // println!("++++++"); // println!("{:?} >= {:?}, {:?} > {:?}", lkey, pkey, pkey, rkey); // println!("out of order key found {}", work_idx); // println!("left --> {:?}", lnode); // println!("right -> {:?}", rnode); // println!("prnt -> {:?}", self); debug_assert!(false); return false; } } // All good! true } fn free(node: *mut Self) { unsafe { let mut _x: Box>> = Box::from_raw(node as *mut CachePadded>); } } } impl Debug for Branch { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), Error> { debug_assert_branch!(self); write!(f, "Branch -> {}", self.count())?; #[cfg(all(test, not(miri)))] write!(f, " nid: {}", self.nid)?; write!(f, " \\-> [ ")?; for idx in 0..self.count() { write!(f, "{:?}, ", unsafe { &*self.key[idx].as_ptr() })?; } write!(f, " ]") } } impl Drop for Branch { fn drop(&mut self) { debug_assert_branch!(self); #[cfg(all(test, not(miri)))] release_nid(self.nid); // Due to the use of maybe uninit we have to drop any contained values. 
// https://github.com/kanidm/concread/issues/55 // if we are invalid, do NOT drop our internals as they MAY be inconsistent. // this WILL leak memory, but it's better than crashing. if self.meta.0 & FLAG_INVALID == 0 { unsafe { for idx in 0..self.count() { ptr::drop_in_place(self.key[idx].as_mut_ptr()); } } } // Done self.meta.0 = FLAG_DROPPED; debug_assert!(self.meta.0 & FLAG_MASK != FLAG_BRANCH); // println!("set branch {:?} to {:x}", self.nid, self.meta.0); } } #[cfg(test)] mod tests { use super::*; #[test] fn test_bptree2_node_cache_size() { let ls = std::mem::size_of::>() - std::mem::size_of::(); let bs = std::mem::size_of::>() - std::mem::size_of::(); #[cfg(feature = "skinny")] { assert!(ls <= 64); assert!(bs <= 64); } #[cfg(not(feature = "skinny"))] { assert!(ls <= 128); assert!(bs <= 128); } } #[test] fn test_bptree2_node_test_weird_basics() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(leaf.get_txid() == 1); // println!("{:?}", leaf); leaf.set_count(1); assert!(leaf.count() == 1); leaf.set_count(0); assert!(leaf.count() == 0); leaf.inc_count(); leaf.inc_count(); leaf.inc_count(); assert!(leaf.count() == 3); leaf.dec_count(); leaf.dec_count(); leaf.dec_count(); assert!(leaf.count() == 0); /* let branch: *mut Branch = Node::new_branch(1, ptr::null_mut(), ptr::null_mut()); let branch = unsafe { &mut *branch }; assert!(branch.get_txid() == 1); // println!("{:?}", branch); branch.set_count(3); assert!(branch.count() == 3); branch.set_count(0); assert!(branch.count() == 0); Branch::free(branch as *mut _); */ Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_bptree2_node_leaf_in_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(leaf.get_txid() == 1); // Check insert to capacity for kv in 0..L_CAPACITY { let r = leaf.insert_or_update(kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(&kv) == Some(&kv)); } else { assert!(false); } } 
assert!(leaf.verify()); // Check update to capacity for kv in 0..L_CAPACITY { let r = leaf.insert_or_update(kv, kv); if let LeafInsertState::Ok(Some(pkv)) = r { assert!(pkv == kv); assert!(leaf.get_ref(&kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_bptree2_node_leaf_out_of_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(L_CAPACITY <= 8); let kvs = [7, 5, 1, 6, 2, 3, 0, 8]; assert!(leaf.get_txid() == 1); // Check insert to capacity for idx in 0..L_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(&kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.count() == L_CAPACITY); // Check update to capacity for idx in 0..L_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv, kv); if let LeafInsertState::Ok(Some(pkv)) = r { assert!(pkv == kv); assert!(leaf.get_ref(&kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.count() == L_CAPACITY); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_bptree2_node_leaf_min() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(L_CAPACITY <= 8); let kvs = [3, 2, 6, 4, 5, 1, 9, 0]; let min = [3, 2, 2, 2, 2, 1, 1, 0]; for idx in 0..L_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(&kv) == Some(&kv)); assert!(leaf.min() == &min[idx]); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.count() == L_CAPACITY); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_bptree2_node_leaf_max() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(L_CAPACITY <= 8); let kvs = [1, 3, 2, 6, 4, 5, 9, 0]; let max: [usize; 8] = [1, 3, 3, 6, 6, 6, 9, 9]; for idx in 0..L_CAPACITY { let kv = 
kvs[idx];
            let r = leaf.insert_or_update(kv, kv);
            if let LeafInsertState::Ok(None) = r {
                assert!(leaf.get_ref(&kv) == Some(&kv));
                assert!(leaf.max() == &max[idx]);
            } else {
                assert!(false);
            }
        }
        assert!(leaf.verify());
        assert!(leaf.count() == L_CAPACITY);
        Leaf::free(leaf as *mut _);
        assert_released();
    }

    #[test]
    fn test_bptree2_node_leaf_remove_order() {
        let leaf: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf = unsafe { &mut *leaf };
        for kv in 0..L_CAPACITY {
            leaf.insert_or_update(kv, kv);
        }
        // Remove all but one.
        for kv in 0..(L_CAPACITY - 1) {
            let r = leaf.remove(&kv);
            if let LeafRemoveState::Ok(Some(rkv)) = r {
                assert!(rkv == kv);
            } else {
                assert!(false);
            }
        }
        assert!(leaf.count() == 1);
        assert!(leaf.max() == &(L_CAPACITY - 1));
        // Remove a non-existent value.
        let r = leaf.remove(&(L_CAPACITY + 20));
        if let LeafRemoveState::Ok(None) = r {
            // Ok!
        } else {
            assert!(false);
        }
        // Finally clear the node, should request a shrink.
        let kv = L_CAPACITY - 1;
        let r = leaf.remove(&kv);
        if let LeafRemoveState::Shrink(Some(rkv)) = r {
            assert!(rkv == kv);
        } else {
            assert!(false);
        }
        assert!(leaf.count() == 0);
        // Remove non-existent post shrink. Should never happen
        // but safety first!
        let r = leaf.remove(&0);
        if let LeafRemoveState::Shrink(None) = r {
            // Ok!
        } else {
            assert!(false);
        }
        assert!(leaf.count() == 0);
        assert!(leaf.verify());
        Leaf::free(leaf as *mut _);
        assert_released();
    }

    #[test]
    fn test_bptree2_node_leaf_remove_out_of_order() {
        let leaf: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf = unsafe { &mut *leaf };
        for kv in 0..L_CAPACITY {
            leaf.insert_or_update(kv, kv);
        }
        let mid = L_CAPACITY / 2;
        // This test removes all BUT one node to keep the states simple.
for kv in mid..(L_CAPACITY - 1) {
            let r = leaf.remove(&kv);
            match r {
                LeafRemoveState::Ok(_) => {}
                _ => panic!(),
            }
        }
        for kv in 0..(L_CAPACITY / 2) {
            let r = leaf.remove(&kv);
            match r {
                LeafRemoveState::Ok(_) => {}
                _ => panic!(),
            }
        }
        assert!(leaf.count() == 1);
        assert!(leaf.verify());
        Leaf::free(leaf as *mut _);
        assert_released();
    }

    #[test]
    fn test_bptree2_node_leaf_insert_split() {
        let leaf: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf = unsafe { &mut *leaf };
        for kv in 0..L_CAPACITY {
            leaf.insert_or_update(kv + 10, kv + 10);
        }
        // Split right
        let r = leaf.insert_or_update(L_CAPACITY + 10, L_CAPACITY + 10);
        if let LeafInsertState::Split(rleaf) = r {
            unsafe {
                assert!((*rleaf).count() == 1);
            }
            Leaf::free(rleaf);
        } else {
            panic!();
        }
        // Split left
        let r = leaf.insert_or_update(0, 0);
        if let LeafInsertState::RevSplit(lleaf) = r {
            unsafe {
                assert!((*lleaf).count() == 1);
            }
            Leaf::free(lleaf);
        } else {
            panic!();
        }
        assert!(leaf.count() == L_CAPACITY);
        assert!(leaf.verify());
        Leaf::free(leaf as *mut _);
        assert_released();
    }

    /*
    #[test]
    fn test_bptree_leaf_remove_lt() {
        // This is used in split off.
        // Remove none
        let leaf1: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf1 = unsafe { &mut *leaf1 };
        for kv in 0..L_CAPACITY {
            let _ = leaf1.insert_or_update(kv + 10, kv);
        }
        leaf1.remove_lt(&5);
        assert!(leaf1.count() == L_CAPACITY);
        Leaf::free(leaf1 as *mut _);

        // Remove all
        let leaf2: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf2 = unsafe { &mut *leaf2 };
        for kv in 0..L_CAPACITY {
            let _ = leaf2.insert_or_update(kv + 10, kv);
        }
        leaf2.remove_lt(&(L_CAPACITY + 10));
        assert!(leaf2.count() == 0);
        Leaf::free(leaf2 as *mut _);

        // Remove from middle
        let leaf3: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf3 = unsafe { &mut *leaf3 };
        for kv in 0..L_CAPACITY {
            let _ = leaf3.insert_or_update(kv + 10, kv);
        }
        leaf3.remove_lt(&((L_CAPACITY / 2) + 10));
        assert!(leaf3.count() == (L_CAPACITY / 2));
        Leaf::free(leaf3 as *mut _);

        // Remove less than not in leaf.
        let leaf4: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let leaf4 = unsafe { &mut *leaf4 };
        let _ = leaf4.insert_or_update(5, 5);
        let _ = leaf4.insert_or_update(15, 15);
        leaf4.remove_lt(&10);
        assert!(leaf4.count() == 1);
        // Add another and remove all.
        let _ = leaf4.insert_or_update(20, 20);
        leaf4.remove_lt(&25);
        assert!(leaf4.count() == 0);
        Leaf::free(leaf4 as *mut _);
        // Done!
        assert_released();
    }
    */

    /* ============================================ */
    // Branch tests here!

    #[test]
    fn test_bptree2_node_branch_new() {
        // Create a new branch, and test it.
        let left: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let left_ref = unsafe { &mut *left };
        let right: *mut Leaf<usize, usize> = Node::new_leaf(1);
        let right_ref = unsafe { &mut *right };
        // add kvs to l and r
        for kv in 0..L_CAPACITY {
            left_ref.insert_or_update(kv + 10, kv + 10);
            right_ref.insert_or_update(kv + 20, kv + 20);
        }
        // create branch
        let branch: *mut Branch<usize, usize> = Node::new_branch(
            1,
            left as *mut Node<usize, usize>,
            right as *mut Node<usize, usize>,
        );
        let branch_ref = unsafe { &mut *branch };
        // verify
        assert!(branch_ref.verify());
        // Test .min works on our descendants
        assert!(branch_ref.min() == &10);
        // Test .max works on our descendants.
        assert!(branch_ref.max() == &(20 + L_CAPACITY - 1));
        // Get some k within the leaves.
        assert!(branch_ref.get_ref(&11) == Some(&11));
        assert!(branch_ref.get_ref(&21) == Some(&21));
        // get some k that is out of bounds.
        assert!(branch_ref.get_ref(&1).is_none());
        assert!(branch_ref.get_ref(&100).is_none());

        Leaf::free(left as *mut _);
        Leaf::free(right as *mut _);
        Branch::free(branch as *mut _);
        assert_released();
    }

    // Helpers
    macro_rules!
test_3_leaf { ($fun:expr) => {{ let a: *mut Leaf = Node::new_leaf(1); let b: *mut Leaf = Node::new_leaf(1); let c: *mut Leaf = Node::new_leaf(1); unsafe { (*a).insert_or_update(10, 10); (*b).insert_or_update(20, 20); (*c).insert_or_update(30, 30); } $fun(a, b, c); Leaf::free(a as *mut _); Leaf::free(b as *mut _); Leaf::free(c as *mut _); assert_released(); }}; } #[test] fn test_bptree2_node_branch_add_min() { // This pattern occurs with "revsplit" to help with reverse // ordered inserts. test_3_leaf!(|a, b, c| { // Add the max two to the branch let branch: *mut Branch = Node::new_branch( 1, b as *mut Node, c as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); // Now min node (uses a diff function!) let r = branch_ref.add_node_left(a as *mut Node, 0); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), }; // Assert okay + verify assert!(branch_ref.verify()); Branch::free(branch as *mut _); }) } #[test] fn test_bptree2_node_branch_add_mid() { test_3_leaf!(|a, b, c| { // Add the outer two to the branch let branch: *mut Branch = Node::new_branch( 1, a as *mut Node, c as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); let r = branch_ref.add_node(b as *mut Node); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), }; // Assert okay + verify assert!(branch_ref.verify()); Branch::free(branch as *mut _); }) } #[test] fn test_bptree2_node_branch_add_max() { test_3_leaf!(|a, b, c| { // add the bottom two let branch: *mut Branch = Node::new_branch( 1, a as *mut Node, b as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); let r = branch_ref.add_node(c as *mut Node); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), }; // Assert okay + verify assert!(branch_ref.verify()); Branch::free(branch as *mut _); }) } // Helpers macro_rules! 
test_max_leaf { ($fun:expr) => {{ let a: *mut Leaf = Node::new_leaf(1); let b: *mut Leaf = Node::new_leaf(1); let c: *mut Leaf = Node::new_leaf(1); let d: *mut Leaf = Node::new_leaf(1); #[cfg(not(feature = "skinny"))] let e: *mut Leaf = Node::new_leaf(1); #[cfg(not(feature = "skinny"))] let f: *mut Leaf = Node::new_leaf(1); #[cfg(not(feature = "skinny"))] let g: *mut Leaf = Node::new_leaf(1); #[cfg(not(feature = "skinny"))] let h: *mut Leaf = Node::new_leaf(1); unsafe { (*a).insert_or_update(10, 10); (*b).insert_or_update(20, 20); (*c).insert_or_update(30, 30); (*d).insert_or_update(40, 40); #[cfg(not(feature = "skinny"))] { (*e).insert_or_update(50, 50); (*f).insert_or_update(60, 60); (*g).insert_or_update(70, 70); (*h).insert_or_update(80, 80); } } let branch: *mut Branch = Node::new_branch( 1, a as *mut Node, b as *mut Node, ); let branch_ref = unsafe { &mut *branch }; branch_ref.add_node(c as *mut Node); branch_ref.add_node(d as *mut Node); #[cfg(not(feature = "skinny"))] { branch_ref.add_node(e as *mut Node); branch_ref.add_node(f as *mut Node); branch_ref.add_node(g as *mut Node); branch_ref.add_node(h as *mut Node); } assert!(branch_ref.count() == L_CAPACITY); #[cfg(feature = "skinny")] $fun(branch_ref, 40); #[cfg(not(feature = "skinny"))] $fun(branch_ref, 80); // MUST NOT verify here, as it's a use after free of the tests inserted node! 
Branch::free(branch as *mut _); Leaf::free(a as *mut _); Leaf::free(b as *mut _); Leaf::free(c as *mut _); Leaf::free(d as *mut _); #[cfg(not(feature = "skinny"))] { Leaf::free(e as *mut _); Leaf::free(f as *mut _); Leaf::free(g as *mut _); Leaf::free(h as *mut _); } assert_released(); }}; } #[test] fn test_bptree2_node_branch_add_split_min() { // Used in rev split } #[test] fn test_bptree2_node_branch_add_split_mid() { test_max_leaf!(|branch_ref: &mut Branch, max: usize| { let node: *mut Leaf = Node::new_leaf(1); // Branch already has up to L_CAPACITY, incs of 10 unsafe { (*node).insert_or_update(15, 15); }; // Add in the middle let r = branch_ref.add_node(node as *mut _); match r { BranchInsertState::Split(x, y) => { unsafe { assert!((*x).min() == &(max - 10)); assert!((*y).min() == &max); } // X, Y will be freed by the macro caller. } _ => debug_assert!(false), }; assert!(branch_ref.verify()); // Free node. Leaf::free(node as *mut _); }) } #[test] fn test_bptree2_node_branch_add_split_max() { test_max_leaf!(|branch_ref: &mut Branch, max: usize| { let node: *mut Leaf = Node::new_leaf(1); // Branch already has up to L_CAPACITY, incs of 10 unsafe { (*node).insert_or_update(200, 200); }; // Add in at the end. let r = branch_ref.add_node(node as *mut _); match r { BranchInsertState::Split(y, mynode) => { unsafe { // println!("{:?}", (*y).min()); // println!("{:?}", (*mynode).min()); assert!((*y).min() == &max); assert!((*mynode).min() == &200); } // Y will be freed by the macro caller. } _ => debug_assert!(false), }; assert!(branch_ref.verify()); // Free node. Leaf::free(node as *mut _); }) } #[test] fn test_bptree2_node_branch_add_split_n1max() { // Add one before the end! test_max_leaf!(|branch_ref: &mut Branch, max: usize| { let node: *mut Leaf = Node::new_leaf(1); // Branch already has up to L_CAPACITY, incs of 10 unsafe { (*node).insert_or_update(max - 5, max - 5); }; // Add in one before the end. 
let r = branch_ref.add_node(node as *mut _); match r { BranchInsertState::Split(mynode, y) => { unsafe { assert!((*mynode).min() == &(max - 5)); assert!((*y).min() == &max); } // Y will be freed by the macro caller. } _ => debug_assert!(false), }; assert!(branch_ref.verify()); // Free node. Leaf::free(node as *mut _); }) } } concread-0.4.6/src/internals/bptree/states.rs000064400000000000000000000045661046102023000173440ustar 00000000000000use super::node::{Leaf, Node}; use std::fmt::Debug; #[derive(Debug)] pub(crate) enum LeafInsertState where K: Ord + Clone + Debug, V: Clone, { Ok(Option), // Split(K, V), Split(*mut Leaf), // We split in the reverse direction. RevSplit(*mut Leaf), } #[derive(Debug)] pub(crate) enum LeafRemoveState where V: Clone, { Ok(Option), // Indicate that we found the associated value, but this // removal means we no longer exist so should be removed. Shrink(Option), } #[derive(Debug)] pub(crate) enum BranchInsertState where K: Ord + Clone + Debug, V: Clone, { Ok, // Two nodes that need addition to a new branch? Split(*mut Node, *mut Node), } #[derive(Debug)] pub(crate) enum BranchShrinkState where K: Ord + Clone + Debug, V: Clone, { Balanced, Merge(*mut Node), Shrink(*mut Node), } /* #[derive(Debug)] pub(crate) enum BranchTrimState where K: Ord + Clone + Debug, V: Clone, { Complete, Promote(*mut Node), } pub(crate) enum CRTrimState where K: Ord + Clone + Debug, V: Clone, { Complete, Clone(*mut Node), Promote(*mut Node), } */ #[derive(Debug)] pub(crate) enum CRInsertState where K: Ord + Clone + Debug, V: Clone, { // We did not need to clone, here is the result. NoClone(Option), // We had to clone the referenced node provided. Clone(Option, *mut Node), // We had to split, but did not need a clone. // REMEMBER: In all split cases it means the key MUST NOT have // previously existed, so it implies return none to the // caller. Split(*mut Node), RevSplit(*mut Node), // We had to clone and split. 
CloneSplit(*mut Node, *mut Node), CloneRevSplit(*mut Node, *mut Node), } #[derive(Debug)] pub(crate) enum CRCloneState where K: Ord + Clone + Debug, V: Clone, { Clone(*mut Node), NoClone, } #[derive(Debug)] pub(crate) enum CRRemoveState where K: Ord + Clone + Debug, V: Clone, { // We did not need to clone, here is the result. NoClone(Option), // We had to clone the referenced node provided. Clone(Option, *mut Node), // Shrink(Option), // CloneShrink(Option, *mut Node), } concread-0.4.6/src/internals/hashmap/cursor.rs000064400000000000000000002357731046102023000175240ustar 00000000000000//! The cursor is what actually knits a tree together from the parts //! we have, and has an important role to keep the system consistent. //! //! Additionally, the cursor also is responsible for general movement //! throughout the structure and how to handle that effectively use super::node::*; use std::borrow::Borrow; use std::fmt::Debug; use std::mem; #[cfg(feature = "ahash")] use ahash::RandomState; #[cfg(not(feature = "ahash"))] use std::collections::hash_map::RandomState; use std::hash::{BuildHasher, Hash, Hasher}; use super::iter::{Iter, KeyIter, ValueIter}; use super::states::*; use std::sync::Mutex; use crate::internals::lincowcell::LinCowCellCapable; /// A stored K/V in the hash bucket. #[derive(Clone)] pub struct Datum where K: Hash + Eq + Clone + Debug, V: Clone, { /// The K in K:V. pub k: K, /// The V in K:V. pub v: V, } /// The internal root of the tree, with associated garbage lists etc. 
#[derive(Debug)]
pub(crate) struct SuperBlock<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    root: *mut Node<K, V>,
    size: usize,
    txid: u64,
    build_hasher: RandomState,
}

unsafe impl<K: Hash + Eq + Clone + Debug, V: Clone> Send for SuperBlock<K, V> {}
unsafe impl<K: Hash + Eq + Clone + Debug, V: Clone> Sync for SuperBlock<K, V> {}

impl<K, V> LinCowCellCapable<CursorRead<K, V>, CursorWrite<K, V>> for SuperBlock<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    fn create_reader(&self) -> CursorRead<K, V> {
        CursorRead::new(self)
    }

    fn create_writer(&self) -> CursorWrite<K, V> {
        CursorWrite::new(self)
    }

    fn pre_commit(
        &mut self,
        mut new: CursorWrite<K, V>,
        prev: &CursorRead<K, V>,
    ) -> CursorRead<K, V> {
        let mut prev_last_seen = prev.last_seen.lock().unwrap();
        debug_assert!((*prev_last_seen).is_empty());

        let new_last_seen = &mut new.last_seen;
        std::mem::swap(&mut (*prev_last_seen), &mut (*new_last_seen));
        debug_assert!((*new_last_seen).is_empty());
        // Now when the lock is dropped, both sides see the correct info and garbage for drops.

        // Clear first seen, we won't be dropping them from here.
        new.first_seen.clear();

        self.root = new.root;
        self.size = new.length;
        self.txid = new.txid;

        // Create the new reader.
        CursorRead::new(self)
    }
}

impl<K: Hash + Eq + Clone + Debug, V: Clone> SuperBlock<K, V> {
    /// 🔥 🔥 🔥
    pub unsafe fn new() -> Self {
        let leaf: *mut Leaf<K, V> = Node::new_leaf(1);
        SuperBlock {
            root: leaf as *mut Node<K, V>,
            size: 0,
            txid: 1,
            build_hasher: RandomState::new(),
        }
    }

    #[cfg(test)]
    pub(crate) fn new_test(txid: u64, root: *mut Node<K, V>) -> Self {
        assert!(txid < (TXID_MASK >> TXID_SHF));
        assert!(txid > 0);
        // Do a pre-verify to be sure it's sane.
        assert!(unsafe { (*root).verify() });

        // Collect anything from root into this txid if needed.
        // Set txid to txid on all tree nodes from the root.
        // first_seen.push(root);
        // unsafe { (*root).sblock_collect(&mut first_seen) };
        // Lock them all
        /*
        first_seen.iter().for_each(|n| unsafe {
            (**n).make_ro();
        });
        */
        // Determine our count internally.
        let (size, _, _) = unsafe { (*root).tree_density() };
        // Good to go!
SuperBlock {
            txid,
            size,
            root,
            build_hasher: RandomState::new(),
        }
    }
}

#[derive(Debug)]
pub(crate) struct CursorRead<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    txid: u64,
    length: usize,
    root: *mut Node<K, V>,
    last_seen: Mutex<Vec<*mut Node<K, V>>>,
    build_hasher: RandomState,
}

unsafe impl<K: Hash + Eq + Clone + Debug, V: Clone> Send for CursorRead<K, V> {}
unsafe impl<K: Hash + Eq + Clone + Debug, V: Clone> Sync for CursorRead<K, V> {}

#[derive(Debug)]
pub(crate) struct CursorWrite<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    // Need to build a stack as we go - of what, I'm not sure ...
    txid: u64,
    length: usize,
    root: *mut Node<K, V>,
    last_seen: Vec<*mut Node<K, V>>,
    first_seen: Vec<*mut Node<K, V>>,
    build_hasher: RandomState,
}

unsafe impl<K: Hash + Eq + Clone + Debug, V: Clone> Send for CursorWrite<K, V> {}
unsafe impl<K: Hash + Eq + Clone + Debug, V: Clone> Sync for CursorWrite<K, V> {}

pub(crate) trait CursorReadOps<K: Hash + Eq + Clone + Debug, V: Clone> {
    #[allow(unused)]
    fn get_root_ref(&self) -> &Node<K, V>;

    fn get_root(&self) -> *mut Node<K, V>;

    fn len(&self) -> usize;

    #[allow(unused)]
    fn get_txid(&self) -> u64;

    fn hash_key<Q>(&self, k: &Q) -> u64
    where
        K: Borrow<Q>,
        Q: Hash + Eq + ?Sized;

    #[cfg(test)]
    fn get_tree_density(&self) -> (usize, usize, usize) {
        // Walk the tree and calculate the packing efficiency.
        let rref = self.get_root_ref();
        rref.tree_density()
    }

    fn search<Q>(&self, h: u64, k: &Q) -> Option<&V>
    where
        K: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        let mut node = self.get_root();
        for _i in 0..65536 {
            if unsafe { (*node).is_leaf() } {
                let lref = leaf_ref!(node, K, V);
                return lref.get_ref(h, k).map(|v| unsafe {
                    // Strip the lifetime and rebind to the lifetime of `self`.
                    // This is safe because we know that these nodes will NOT
                    // be altered during the lifetime of this txn, so the references
                    // will remain stable.
                    let x = v as *const V;
                    &*x as &V
                });
            } else {
                let bref = branch_ref!(node, K, V);
                let idx = bref.locate_node(h);
                node = bref.get_idx_unchecked(idx);
            }
        }
        panic!("Tree depth exceeded max limit (65536). This may indicate memory corruption.");
    }

    #[allow(unused)]
    fn contains_key<Q>(&self, h: u64, k: &Q) -> bool
    where
        K: Borrow<Q>,
        Q: Hash + Eq + ?Sized,
    {
        self.search(h, k).is_some()
    }

    fn kv_iter(&self) -> Iter<K, V> {
        Iter::new(self.get_root(), self.len())
    }

    fn k_iter(&self) -> KeyIter<K, V> {
        KeyIter::new(self.get_root(), self.len())
    }

    fn v_iter(&self) -> ValueIter<K, V> {
        ValueIter::new(self.get_root(), self.len())
    }

    #[cfg(test)]
    fn verify(&self) -> bool {
        self.get_root_ref().no_cycles() && self.get_root_ref().verify() && {
            let (l, _, _) = self.get_tree_density();
            l == self.len()
        }
    }
}

impl<K: Hash + Eq + Clone + Debug, V: Clone> CursorWrite<K, V> {
    pub(crate) fn new(sblock: &SuperBlock<K, V>) -> Self {
        let txid = sblock.txid + 1;
        assert!(txid < (TXID_MASK >> TXID_SHF));
        // println!("starting wr txid -> {:?}", txid);
        let length = sblock.size;
        let root = sblock.root;
        // TODO: Could optimise how big these are based
        // on past trends? Or based on % tree size?
        let last_seen = Vec::with_capacity(16);
        let first_seen = Vec::with_capacity(16);
        let build_hasher = sblock.build_hasher.clone();

        CursorWrite {
            txid,
            length,
            root,
            last_seen,
            first_seen,
            build_hasher,
        }
    }

    pub(crate) fn clear(&mut self) {
        // Reset the values in this tree.
        // We need to mark everything as disposable, and create a new root!
        self.last_seen.push(self.root);
        unsafe { (*self.root).sblock_collect(&mut self.last_seen) };

        let nroot: *mut Leaf<K, V> = Node::new_leaf(self.txid);
        let mut nroot = nroot as *mut Node<K, V>;
        self.first_seen.push(nroot);
        mem::swap(&mut self.root, &mut nroot);
        self.length = 0;
    }

    // Functions as insert_or_update
    pub(crate) fn insert(&mut self, h: u64, k: K, v: V) -> Option<V> {
        let r = match clone_and_insert(
            self.root,
            self.txid,
            h,
            k,
            v,
            &mut self.last_seen,
            &mut self.first_seen,
        ) {
            CRInsertState::NoClone(res) => res,
            CRInsertState::Clone(res, mut nnode) => {
                // We have a new root node, swap it in.
                // !!! It's already been cloned and marked for cleaning by the clone_and_insert
                // call.
mem::swap(&mut self.root, &mut nnode);
                // Return the insert result
                res
            }
            CRInsertState::CloneSplit(lnode, rnode) => {
                // The previous root had to split - make a new
                // root now and put it in place.
                let mut nroot = Node::new_branch(self.txid, lnode, rnode) as *mut Node<K, V>;
                self.first_seen.push(nroot);
                // The root was cloned as part of clone split
                // This swaps the POINTERS not the content!
                mem::swap(&mut self.root, &mut nroot);
                // As we split, there must NOT have been an existing
                // key to overwrite.
                None
            }
            CRInsertState::Split(rnode) => {
                // The previous root was already part of this txn, but has now
                // split. We need to construct a new root and swap them.
                //
                // Note, that we have to briefly take an extra RC on the root so
                // that we can get it into the branch.
                let mut nroot = Node::new_branch(self.txid, self.root, rnode) as *mut Node<K, V>;
                self.first_seen.push(nroot);
                // println!("ls push 2");
                // self.last_seen.push(self.root);
                mem::swap(&mut self.root, &mut nroot);
                // As we split, there must NOT have been an existing
                // key to overwrite.
                None
            }
            CRInsertState::RevSplit(lnode) => {
                let mut nroot = Node::new_branch(self.txid, lnode, self.root) as *mut Node<K, V>;
                self.first_seen.push(nroot);
                // println!("ls push 3");
                // self.last_seen.push(self.root);
                mem::swap(&mut self.root, &mut nroot);
                None
            }
            CRInsertState::CloneRevSplit(rnode, lnode) => {
                let mut nroot = Node::new_branch(self.txid, lnode, rnode) as *mut Node<K, V>;
                self.first_seen.push(nroot);
                // root was cloned in the rev split
                // println!("ls push 4");
                // self.last_seen.push(self.root);
                mem::swap(&mut self.root, &mut nroot);
                None
            }
        };
        // If this is none, it means a new slot is now occupied.
if r.is_none() { self.length += 1; } r } pub(crate) fn remove(&mut self, h: u64, k: &K) -> Option { let r = match clone_and_remove( self.root, self.txid, h, k, &mut self.last_seen, &mut self.first_seen, ) { CRRemoveState::NoClone(res) => res, CRRemoveState::Clone(res, mut nnode) => { mem::swap(&mut self.root, &mut nnode); res } CRRemoveState::Shrink(res) => { if self_meta!(self.root).is_leaf() { // No action - we have an empty tree. res } else { // Root is being demoted, get the last branch and // promote it to the root. self.last_seen.push(self.root); let rmut = branch_ref!(self.root, K, V); let mut pnode = rmut.extract_last_node(); mem::swap(&mut self.root, &mut pnode); res } } CRRemoveState::CloneShrink(res, mut nnode) => { if self_meta!(nnode).is_leaf() { // The tree is empty, but we cloned the root to get here. mem::swap(&mut self.root, &mut nnode); res } else { // Our root is getting demoted here, get the remaining branch self.last_seen.push(nnode); let rmut = branch_ref!(nnode, K, V); let mut pnode = rmut.extract_last_node(); // Promote it to the new root mem::swap(&mut self.root, &mut pnode); res } } }; if r.is_some() { self.length -= 1; } r } #[cfg(test)] pub(crate) fn path_clone(&mut self, h: u64) { match path_clone( self.root, self.txid, h, &mut self.last_seen, &mut self.first_seen, ) { CRCloneState::Clone(mut nroot) => { // We cloned the root, so swap it. mem::swap(&mut self.root, &mut nroot); } CRCloneState::NoClone => {} }; } pub(crate) fn get_mut_ref(&mut self, h: u64, k: &K) -> Option<&mut V> { match path_clone( self.root, self.txid, h, &mut self.last_seen, &mut self.first_seen, ) { CRCloneState::Clone(mut nroot) => { // We cloned the root, so swap it. mem::swap(&mut self.root, &mut nroot); } CRCloneState::NoClone => {} }; // Now get the ref. 
path_get_mut_ref(self.root, h, k) } /* pub(crate) unsafe fn get_slot_mut_ref(&mut self, h: u64) -> Option<&mut [Datum]> { match path_clone( self.root, self.txid, h, &mut self.last_seen, &mut self.first_seen, ) { CRCloneState::Clone(mut nroot) => { // We cloned the root, so swap it. mem::swap(&mut self.root, &mut nroot); } CRCloneState::NoClone => {} }; // Now get the ref. path_get_slot_mut_ref(self.root, h) } */ #[cfg(test)] pub(crate) fn root_txid(&self) -> u64 { self.get_root_ref().get_txid() } /* #[cfg(test)] pub(crate) fn tree_density(&self) -> (usize, usize, usize) { self.get_root_ref().tree_density() } */ } impl Extend<(K, V)> for CursorWrite { fn extend>(&mut self, iter: I) { iter.into_iter().for_each(|(k, v)| { let h = self.hash_key(&k); let _ = self.insert(h, k, v); }); } } impl Drop for CursorWrite { fn drop(&mut self) { // If there is content in first_seen, this means we aborted and must rollback // of these items! // println!("Releasing CW FS -> {:?}", self.first_seen); self.first_seen.iter().for_each(|n| Node::free(*n)) } } impl Drop for CursorRead { fn drop(&mut self) { // If there is content in last_seen, a future generation wants us to remove it! let last_seen_guard = self .last_seen .try_lock() .expect("Unable to lock, something is horridly wrong!"); last_seen_guard.iter().for_each(|n| Node::free(*n)); std::mem::drop(last_seen_guard); } } impl Drop for SuperBlock { fn drop(&mut self) { // eprintln!("Releasing SuperBlock ..."); // We must be the last SB and no txns exist. Drop the tree now. // TODO: Calc this based on size. 
let mut first_seen = Vec::with_capacity(16); // eprintln!("{:?}", self.root); first_seen.push(self.root); unsafe { (*self.root).sblock_collect(&mut first_seen) }; first_seen.iter().for_each(|n| Node::free(*n)); } } impl CursorRead { pub(crate) fn new(sblock: &SuperBlock) -> Self { // println!("starting rd txid -> {:?}", sblock.txid); let build_hasher = sblock.build_hasher.clone(); CursorRead { txid: sblock.txid, length: sblock.size, root: sblock.root, last_seen: Mutex::new(Vec::with_capacity(0)), build_hasher, } } } /* impl Drop for CursorRead { fn drop(&mut self) { unimplemented!(); } } */ impl CursorReadOps for CursorRead { fn get_root_ref(&self) -> &Node { unsafe { &*(self.root) } } fn get_root(&self) -> *mut Node { self.root } fn len(&self) -> usize { self.length } fn get_txid(&self) -> u64 { self.txid } fn hash_key(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized, { hash_key!(self, k) } } impl CursorReadOps for CursorWrite { fn get_root_ref(&self) -> &Node { unsafe { &*(self.root) } } fn get_root(&self) -> *mut Node { self.root } fn len(&self) -> usize { self.length } fn get_txid(&self) -> u64 { self.txid } fn hash_key(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized, { hash_key!(self, k) } } fn clone_and_insert( node: *mut Node, txid: u64, h: u64, k: K, v: V, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> CRInsertState { /* * Let's talk about the magic of this function. Come, join * me around the [🔥🔥🔥] * * This function is the heart and soul of a copy on write * structure - as we progress to the leaf location where we * wish to perform an alteration, we clone (if required) all * nodes on the path. This way an abort (rollback) of the * commit simply is to drop the cursor, where the "new" * cloned values are only referenced. 
     * To commit, we only need
     * to replace the tree root in the parent structures as
     * the cloned path must by definition include the root, and
     * will contain references to nodes that did not need cloning,
     * thus keeping them alive.
     */
    if self_meta!(node).is_leaf() {
        // NOTE: We have to match, rather than map here, as rust tries to
        // move k:v into both closures!
        // Leaf path
        match leaf_ref!(node, K, V).req_clone(txid) {
            Some(cnode) => {
                // println!();
                first_seen.push(cnode);
                // println!("ls push 5");
                last_seen.push(node);
                // Clone was required.
                let mref = leaf_ref!(cnode, K, V);
                // insert to the new node.
                match mref.insert_or_update(h, k, v) {
                    LeafInsertState::Ok(res) => CRInsertState::Clone(res, cnode),
                    LeafInsertState::Split(rnode) => {
                        first_seen.push(rnode as *mut Node<K, V>);
                        // let rnode = Node::new_leaf_ins(txid, sk, sv);
                        CRInsertState::CloneSplit(cnode, rnode as *mut Node<K, V>)
                    }
                    LeafInsertState::RevSplit(lnode) => {
                        first_seen.push(lnode as *mut Node<K, V>);
                        CRInsertState::CloneRevSplit(cnode, lnode as *mut Node<K, V>)
                    }
                }
            }
            None => {
                // No clone required.
                // simply do the insert.
                let mref = leaf_ref!(node, K, V);
                match mref.insert_or_update(h, k, v) {
                    LeafInsertState::Ok(res) => CRInsertState::NoClone(res),
                    LeafInsertState::Split(rnode) => {
                        // We split, but left is already part of the txn group, so lets
                        // just return what's new.
                        // let rnode = Node::new_leaf_ins(txid, sk, sv);
                        first_seen.push(rnode as *mut Node<K, V>);
                        CRInsertState::Split(rnode as *mut Node<K, V>)
                    }
                    LeafInsertState::RevSplit(lnode) => {
                        first_seen.push(lnode as *mut Node<K, V>);
                        CRInsertState::RevSplit(lnode as *mut Node<K, V>)
                    }
                }
            }
        } // end match
    } else {
        // Branch path
        // Decide if we need to clone - we do this as we descend due to a quirk in Arc
        // get_mut, because we don't have access to get_mut_unchecked (and this api may
        // never be stabilised anyway). When we change this to *mut + garbage lists we
        // could consider restoring the reactive behaviour that clones up, rather than
        // cloning down the path.
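The clone-on-descent contract that both the leaf and branch paths rely on can be distilled into a minimal sketch, assuming only std (`MiniLeaf` is a hypothetical stand-in, not the real `Leaf` type): a node created by an older transaction must be cloned exactly once, and a node already stamped with the writer's txid is mutated in place.

```rust
// Hypothetical stand-in for a tree node carrying the txid that created it.
#[derive(Clone)]
pub struct MiniLeaf {
    pub txid: u64,
    pub keys: Vec<u64>,
}

impl MiniLeaf {
    // Mirror of the req_clone idea used above: clone only when the node
    // belongs to an older generation; None means "already part of this
    // txn, mutate in place".
    pub fn req_clone(&self, txid: u64) -> Option<MiniLeaf> {
        if self.txid == txid {
            None
        } else {
            let mut c = self.clone();
            c.txid = txid;
            Some(c)
        }
    }
}
```

Readers on the committed generation never observe the clone, which is what makes drop-the-cursor rollback safe.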
        //
        // NOTE: We have to match, rather than map here, as rust tries to
        // move k:v into both closures!
        match branch_ref!(node, K, V).req_clone(txid) {
            Some(cnode) => {
                //
                first_seen.push(cnode);
                // println!("ls push 6");
                last_seen.push(node);
                // Not same txn, clone instead.
                let nmref = branch_ref!(cnode, K, V);
                let anode_idx = nmref.locate_node(h);
                let anode = nmref.get_idx_unchecked(anode_idx);

                match clone_and_insert(anode, txid, h, k, v, last_seen, first_seen) {
                    CRInsertState::Clone(res, lnode) => {
                        nmref.replace_by_idx(anode_idx, lnode);
                        // Pass back up that we cloned.
                        CRInsertState::Clone(res, cnode)
                    }
                    CRInsertState::CloneSplit(lnode, rnode) => {
                        // CloneSplit here, would have already updated lnode/rnode into the
                        // gc lists.
                        // Second, we update anode_idx node with our lnode as the new clone.
                        nmref.replace_by_idx(anode_idx, lnode);
                        // Third we insert rnode - perfect world it's at anode_idx + 1, but
                        // we use the normal insert routine for now.
                        match nmref.add_node(rnode) {
                            BranchInsertState::Ok => CRInsertState::Clone(None, cnode),
                            BranchInsertState::Split(clnode, crnode) => {
                                // Create a new branch to hold these children.
                                let nrnode = Node::new_branch(txid, clnode, crnode);
                                first_seen.push(nrnode as *mut Node<K, V>);
                                // Return it
                                CRInsertState::CloneSplit(cnode, nrnode as *mut Node<K, V>)
                            }
                        }
                    }
                    CRInsertState::CloneRevSplit(nnode, lnode) => {
                        nmref.replace_by_idx(anode_idx, nnode);
                        match nmref.add_node_left(lnode, anode_idx) {
                            BranchInsertState::Ok => CRInsertState::Clone(None, cnode),
                            BranchInsertState::Split(clnode, crnode) => {
                                let nrnode = Node::new_branch(txid, clnode, crnode);
                                first_seen.push(nrnode as *mut Node<K, V>);
                                CRInsertState::CloneSplit(cnode, nrnode as *mut Node<K, V>)
                            }
                        }
                    }
                    CRInsertState::NoClone(_res) => {
                        // If our descendant did not clone, then we don't have to either.
                        unreachable!("Should never be possible.");
                        // CRInsertState::NoClone(res)
                    }
                    CRInsertState::Split(_rnode) => {
                        // I think
                        unreachable!("This represents a corrupt tree state");
                    }
                    CRInsertState::RevSplit(_lnode) => {
                        unreachable!("This represents a corrupt tree state");
                    }
                } // end match
            } // end Some,
            None => {
                let nmref = branch_ref!(node, K, V);
                let anode_idx = nmref.locate_node(h);
                let anode = nmref.get_idx_unchecked(anode_idx);

                match clone_and_insert(anode, txid, h, k, v, last_seen, first_seen) {
                    CRInsertState::Clone(res, lnode) => {
                        nmref.replace_by_idx(anode_idx, lnode);
                        // We did not clone, and no further work needed.
                        CRInsertState::NoClone(res)
                    }
                    CRInsertState::NoClone(res) => {
                        // If our descendant did not clone, then we don't have to do any adjustments
                        // or further work.
                        CRInsertState::NoClone(res)
                    }
                    CRInsertState::Split(rnode) => {
                        match nmref.add_node(rnode) {
                            // Similar to CloneSplit - we are either okay, and the insert was happy.
                            BranchInsertState::Ok => CRInsertState::NoClone(None),
                            // Or *we* split as well, and need to return a new sibling branch.
                            BranchInsertState::Split(clnode, crnode) => {
                                // Create a new branch to hold these children.
                                let nrnode = Node::new_branch(txid, clnode, crnode);
                                first_seen.push(nrnode as *mut Node<K, V>);
                                // Return it
                                CRInsertState::Split(nrnode as *mut Node<K, V>)
                            }
                        }
                    }
                    CRInsertState::CloneSplit(lnode, rnode) => {
                        // work inplace.
                        // Second, we update anode_idx node with our lnode as the new clone.
                        nmref.replace_by_idx(anode_idx, lnode);
                        // Third we insert rnode - perfect world it's at anode_idx + 1, but
                        // we use the normal insert routine for now.
                        match nmref.add_node(rnode) {
                            // Similar to CloneSplit - we are either okay, and the insert was happy.
                            BranchInsertState::Ok => CRInsertState::NoClone(None),
                            // Or *we* split as well, and need to return a new sibling branch.
                            BranchInsertState::Split(clnode, crnode) => {
                                // Create a new branch to hold these children.
let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); // Return it CRInsertState::Split(nrnode as *mut Node) } } } CRInsertState::RevSplit(lnode) => match nmref.add_node_left(lnode, anode_idx) { BranchInsertState::Ok => CRInsertState::NoClone(None), BranchInsertState::Split(clnode, crnode) => { let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); CRInsertState::Split(nrnode as *mut Node) } }, CRInsertState::CloneRevSplit(nnode, lnode) => { nmref.replace_by_idx(anode_idx, nnode); match nmref.add_node_left(lnode, anode_idx) { BranchInsertState::Ok => CRInsertState::NoClone(None), BranchInsertState::Split(clnode, crnode) => { let nrnode = Node::new_branch(txid, clnode, crnode); first_seen.push(nrnode as *mut Node); CRInsertState::Split(nrnode as *mut Node) } } } } // end match } } // end match branch ref clone } // end if leaf } fn path_clone( node: *mut Node, txid: u64, h: u64, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> CRCloneState { if unsafe { (*node).is_leaf() } { unsafe { (*(node as *mut Leaf)) .req_clone(txid) .map(|cnode| { // Track memory last_seen.push(node); // println!("ls push 7 {:?}", node); first_seen.push(cnode); CRCloneState::Clone(cnode) }) .unwrap_or(CRCloneState::NoClone) } } else { // We are in a branch, so locate our descendent and prepare // to clone if needed. // println!("txid -> {:?} {:?}", node_txid, txid); let nmref = branch_ref!(node, K, V); let anode_idx = nmref.locate_node(h); let anode = nmref.get_idx_unchecked(anode_idx); match path_clone(anode, txid, h, last_seen, first_seen) { CRCloneState::Clone(cnode) => { // Do we need to clone? nmref .req_clone(txid) .map(|acnode| { // We require to be cloned. last_seen.push(node); // println!("ls push 8"); first_seen.push(acnode); let nmref = branch_ref!(acnode, K, V); nmref.replace_by_idx(anode_idx, cnode); CRCloneState::Clone(acnode) }) .unwrap_or_else(|| { // Nope, just insert and unwind. 
nmref.replace_by_idx(anode_idx, cnode); CRCloneState::NoClone }) } CRCloneState::NoClone => { // Did not clone, unwind. CRCloneState::NoClone } } } } fn clone_and_remove( node: *mut Node, txid: u64, h: u64, k: &K, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> CRRemoveState { if self_meta!(node).is_leaf() { leaf_ref!(node, K, V) .req_clone(txid) .map(|cnode| { first_seen.push(cnode); // println!("ls push 10 {:?}", node); last_seen.push(node); let mref = leaf_ref!(cnode, K, V); match mref.remove(h, k) { LeafRemoveState::Ok(res) => CRRemoveState::Clone(res, cnode), LeafRemoveState::Shrink(res) => CRRemoveState::CloneShrink(res, cnode), } }) .unwrap_or_else(|| { let mref = leaf_ref!(node, K, V); match mref.remove(h, k) { LeafRemoveState::Ok(res) => CRRemoveState::NoClone(res), LeafRemoveState::Shrink(res) => CRRemoveState::Shrink(res), } }) } else { // Locate the node we need to work on and then react if it // requests a shrink. branch_ref!(node, K, V) .req_clone(txid) .map(|cnode| { first_seen.push(cnode); // println!("ls push 11 {:?}", node); last_seen.push(node); // Done mm let nmref = branch_ref!(cnode, K, V); let anode_idx = nmref.locate_node(h); let anode = nmref.get_idx_unchecked(anode_idx); match clone_and_remove(anode, txid, h, k, last_seen, first_seen) { CRRemoveState::NoClone(_res) => { unreachable!("Should never occur"); // CRRemoveState::NoClone(res) } CRRemoveState::Clone(res, lnode) => { nmref.replace_by_idx(anode_idx, lnode); CRRemoveState::Clone(res, cnode) } CRRemoveState::Shrink(_res) => { unreachable!("This represents a corrupt tree state"); } CRRemoveState::CloneShrink(res, nnode) => { // Put our cloned child into the tree at the correct location, don't worry, // the shrink_decision will deal with it. nmref.replace_by_idx(anode_idx, nnode); // Now setup the sibling, to the left *or* right. let right_idx = nmref.clone_sibling_idx(txid, anode_idx, last_seen, first_seen); // Okay, now work out what we need to do. 
match nmref.shrink_decision(right_idx) { BranchShrinkState::Balanced => { // K:V were distributed through left and right, // so no further action needed. CRRemoveState::Clone(res, cnode) } BranchShrinkState::Merge(dnode) => { // Right was merged to left, and we remain // valid // println!("ls push 20 {:?}", dnode); debug_assert!(!last_seen.contains(&dnode)); last_seen.push(dnode); CRRemoveState::Clone(res, cnode) } BranchShrinkState::Shrink(dnode) => { // Right was merged to left, but we have now falled under the needed // amount of values. // println!("ls push 21 {:?}", dnode); debug_assert!(!last_seen.contains(&dnode)); last_seen.push(dnode); CRRemoveState::CloneShrink(res, cnode) } } } } }) .unwrap_or_else(|| { // We are already part of this txn let nmref = branch_ref!(node, K, V); let anode_idx = nmref.locate_node(h); let anode = nmref.get_idx_unchecked(anode_idx); match clone_and_remove(anode, txid, h, k, last_seen, first_seen) { CRRemoveState::NoClone(res) => CRRemoveState::NoClone(res), CRRemoveState::Clone(res, lnode) => { nmref.replace_by_idx(anode_idx, lnode); CRRemoveState::NoClone(res) } CRRemoveState::Shrink(res) => { let right_idx = nmref.clone_sibling_idx(txid, anode_idx, last_seen, first_seen); match nmref.shrink_decision(right_idx) { BranchShrinkState::Balanced => { // K:V were distributed through left and right, // so no further action needed. CRRemoveState::NoClone(res) } BranchShrinkState::Merge(dnode) => { // Right was merged to left, and we remain // valid // // A quirk here is based on how clone_sibling_idx works. We may actually // start with anode_idx of 0, which triggers a right clone, so it's // *already* in the mm lists. 
But here right is "last seen" now if // // println!("ls push 22 {:?}", dnode); debug_assert!(!last_seen.contains(&dnode)); last_seen.push(dnode); CRRemoveState::NoClone(res) } BranchShrinkState::Shrink(dnode) => { // Right was merged to left, but we have now falled under the needed // amount of values, so we begin to shrink up. // println!("ls push 23 {:?}", dnode); debug_assert!(!last_seen.contains(&dnode)); last_seen.push(dnode); CRRemoveState::Shrink(res) } } } CRRemoveState::CloneShrink(res, nnode) => { // We don't need to clone, just work on the nmref we have. // // Swap in the cloned node to the correct location. nmref.replace_by_idx(anode_idx, nnode); // Now setup the sibling, to the left *or* right. let right_idx = nmref.clone_sibling_idx(txid, anode_idx, last_seen, first_seen); match nmref.shrink_decision(right_idx) { BranchShrinkState::Balanced => { // K:V were distributed through left and right, // so no further action needed. CRRemoveState::NoClone(res) } BranchShrinkState::Merge(dnode) => { // Right was merged to left, and we remain // valid // println!("ls push 24 {:?}", dnode); debug_assert!(!last_seen.contains(&dnode)); last_seen.push(dnode); CRRemoveState::NoClone(res) } BranchShrinkState::Shrink(dnode) => { // Right was merged to left, but we have now falled under the needed // amount of values. // println!("ls push 25 {:?}", dnode); debug_assert!(!last_seen.contains(&dnode)); last_seen.push(dnode); CRRemoveState::Shrink(res) } } } } }) // end unwrap_or_else } } fn path_get_mut_ref<'a, K, V>(node: *mut Node, h: u64, k: &K) -> Option<&'a mut V> where K: Clone + Hash + Eq + Debug + 'a, V: Clone, { if self_meta!(node).is_leaf() { leaf_ref!(node, K, V).get_mut_ref(h, k) } else { // This nmref binds the life of the reference ... let nmref = branch_ref!(node, K, V); let anode_idx = nmref.locate_node(h); let anode = nmref.get_idx_unchecked(anode_idx); // That we get here. 
        // So we can't just return it, and we need to 'strip' the
        // lifetime so that it's bound to the lifetime of the outer node
        // rather than the nmref.
        let r: Option<*mut V> = path_get_mut_ref(anode, h, k).map(|v| v as *mut V);
        // I solemnly swear I am up to no good.
        r.map(|v| unsafe { &mut *v as &mut V })
    }
}

/*
unsafe fn path_get_slot_mut_ref<'a, K: Clone + Hash + Eq + Debug, V: Clone>(
    node: *mut Node<K, V>,
    h: u64,
) -> Option<&'a mut [Datum]>
where
    K: 'a,
{
    if self_meta!(node).is_leaf() {
        leaf_ref!(node, K, V).get_slot_mut_ref(h)
    } else {
        // This nmref binds the life of the reference ...
        let nmref = branch_ref!(node, K, V);
        let anode_idx = nmref.locate_node(h);
        let anode = nmref.get_idx_unchecked(anode_idx);
        // That we get here. So we can't just return it, and we need to 'strip' the
        // lifetime so that it's bound to the lifetime of the outer node
        // rather than the nmref.
        let r: Option<*mut [Datum]> = path_get_slot_mut_ref(anode, h).map(|v| v as *mut [Datum]);
        // I solemnly swear I am up to no good.
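The pointer round-trip in `path_get_mut_ref` can be isolated as a small sketch (`strip_lifetime` is a hypothetical helper name, not part of the crate): the `&mut V` is laundered through `*mut V` so the returned borrow is no longer tied to the intermediate `nmref` borrow.

```rust
// Hypothetical distillation of the 'strip' trick above: rebinding a mutable
// reference through a raw pointer detaches it from the local borrow that
// produced it. This is sound only while the caller guarantees the value
// outlives the returned reference - in the tree, the node keeps it alive.
fn strip_lifetime<'a, V>(v: &mut V) -> &'a mut V {
    let p: *mut V = v as *mut V;
    unsafe { &mut *p }
}
```

This is exactly the kind of unchecked lifetime extension that shifts the aliasing proof onto the data structure's invariants, which is why the surrounding code is so careful about what lives in the gc lists.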
r.map(|v| &mut *v as &mut [Datum]) } } */ #[cfg(test)] mod tests { use super::super::node::*; use super::super::states::*; use super::SuperBlock; use super::{CursorRead, CursorReadOps}; use crate::internals::lincowcell::LinCowCellCapable; use rand::seq::SliceRandom; use std::mem; fn create_leaf_node(v: usize) -> *mut Node { let node = Node::new_leaf(1); { let nmut: &mut Leaf<_, _> = leaf_ref!(node, usize, usize); nmut.insert_or_update(v as u64, v, v); } node as *mut Node } fn create_leaf_node_full(vbase: usize) -> *mut Node { assert!(vbase % 10 == 0); let node = Node::new_leaf(1); { let nmut = leaf_ref!(node, usize, usize); for idx in 0..H_CAPACITY { let v = vbase + idx; nmut.insert_or_update(v as u64, v, v); } // println!("lnode full {:?} -> {:?}", vbase, nmut); } node as *mut Node } fn create_branch_node_full(vbase: usize) -> *mut Node { let l1 = create_leaf_node(vbase); let l2 = create_leaf_node(vbase + 10); let lbranch = Node::new_branch(1, l1, l2); let bref = branch_ref!(lbranch, usize, usize); for i in 2..HBV_CAPACITY { let l = create_leaf_node(vbase + (10 * i)); let r = bref.add_node(l); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), } } assert!(bref.slots() == H_CAPACITY); lbranch as *mut Node } #[test] fn test_hashmap2_cursor_insert_leaf() { // First create the node + cursor let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); let prev_txid = wcurs.root_txid(); // Now insert - the txid should be different. let r = wcurs.insert(1, 1, 1); assert!(r.is_none()); let r1_txid = wcurs.root_txid(); assert!(r1_txid == prev_txid + 1); // Now insert again - the txid should be the same. let r = wcurs.insert(2, 2, 2); assert!(r.is_none()); let r2_txid = wcurs.root_txid(); assert!(r2_txid == r1_txid); // The clones worked as we wanted! 
assert!(wcurs.verify()); } #[test] fn test_hashmap2_cursor_insert_split_1() { // Given a leaf at max, insert such that: // // leaf // // leaf -> split leaf // // // root // / \ // leaf split leaf // // It's worth noting that this is testing the CloneSplit path // as leaf needs a clone AND to split to achieve the new root. let node = create_leaf_node_full(10); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); let prev_txid = wcurs.root_txid(); let r = wcurs.insert(1, 1, 1); assert!(r.is_none()); let r1_txid = wcurs.root_txid(); assert!(r1_txid == prev_txid + 1); assert!(wcurs.verify()); // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_2() { // Similar to split_1, but test the Split only path. This means // leaf needs to be below max to start, and we insert enough in-txn // to trigger a clone of leaf AND THEN to cause the split. let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); for v in 1..(H_CAPACITY + 1) { // println!("ITER v {}", v); let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_3() { // root // / \ // leaf split leaf // ^ // \----- nnode // // Check leaf split inbetween l/sl (new txn) let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); // println!("{:?}", wcurs); let r = wcurs.insert(19, 19, 19); assert!(r.is_none()); assert!(wcurs.verify()); // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. 
mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_4() { // root // / \ // leaf split leaf // ^ // \----- nnode // // Check leaf split of sl (new txn) // let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); let r = wcurs.insert(29, 29, 29); assert!(r.is_none()); assert!(wcurs.verify()); // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_5() { // root // / \ // leaf split leaf // ^ // \----- nnode // // Check leaf split inbetween l/sl (same txn) // let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let root = Node::new_branch(0, lnode, rnode); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); // Now insert to trigger the needed actions. // Remember, we only need H_CAPACITY because there is already a // value in the leaf. for idx in 0..(H_CAPACITY) { let v = 10 + 1 + idx; let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_6() { // root // / \ // leaf split leaf // ^ // \----- nnode // // Check leaf split of sl (same txn) // let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let root = Node::new_branch(0, lnode, rnode); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); // Now insert to trigger the needed actions. // Remember, we only need H_CAPACITY because there is already a // value in the leaf. 
for idx in 0..(H_CAPACITY) { let v = 20 + 1 + idx; let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_7() { // root // / \ // leaf split leaf // Insert to leaf then split leaf such that root has cloned // in step 1, but doesn't need clone in 2. let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let root = Node::new_branch(0, lnode, rnode); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); let r = wcurs.insert(11, 11, 11); assert!(r.is_none()); assert!(wcurs.verify()); let r = wcurs.insert(21, 21, 21); assert!(r.is_none()); assert!(wcurs.verify()); // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_split_8() { // root // / \ // leaf split leaf // ^ ^ // \---- nnode 1 \----- nnode 2 // // Check double leaf split of sl (same txn). This is to // take the clonesplit path in the branch case where branch already // cloned. // let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); let r = wcurs.insert(19, 19, 19); assert!(r.is_none()); assert!(wcurs.verify()); let r = wcurs.insert(29, 29, 29); assert!(r.is_none()); assert!(wcurs.verify()); // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_stress_1() { // Insert ascending - we want to ensure the tree is a few levels deep // so we do this to a reasonable number. 
let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); for v in 1..(H_CAPACITY << 4) { // println!("ITER v {}", v); let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // println!("DENSITY -> {:?}", wcurs.get_tree_density()); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_stress_2() { // Insert descending let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); for v in (1..(H_CAPACITY << 4)).rev() { // println!("ITER v {}", v); let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // println!("DENSITY -> {:?}", wcurs.get_tree_density()); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_stress_3() { // Insert random let mut rng = rand::thread_rng(); let mut ins: Vec = (1..(H_CAPACITY << 4)).collect(); ins.shuffle(&mut rng); let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); for v in ins.into_iter() { let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // println!("DENSITY -> {:?}", wcurs.get_tree_density()); // On shutdown, check we dropped all as needed. mem::drop(wcurs); mem::drop(sb); assert_released(); } // Add transaction-ised versions. #[test] fn test_hashmap2_cursor_insert_stress_4() { // Insert ascending - we want to ensure the tree is a few levels deep // so we do this to a reasonable number. 
let mut sb = unsafe { SuperBlock::new() }; let mut rdr = sb.create_reader(); for v in 1..(H_CAPACITY << 4) { let mut wcurs = sb.create_writer(); // println!("ITER v {}", v); let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); rdr = sb.pre_commit(wcurs, &rdr); } // println!("{:?}", node); // On shutdown, check we dropped all as needed. mem::drop(rdr); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_stress_5() { // Insert descending let mut sb = unsafe { SuperBlock::new() }; let mut rdr = sb.create_reader(); for v in (1..(H_CAPACITY << 4)).rev() { let mut wcurs = sb.create_writer(); // println!("ITER v {}", v); let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); rdr = sb.pre_commit(wcurs, &rdr); } // println!("{:?}", node); // On shutdown, check we dropped all as needed. mem::drop(rdr); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_insert_stress_6() { // Insert random let mut rng = rand::thread_rng(); let mut ins: Vec = (1..(H_CAPACITY << 4)).collect(); ins.shuffle(&mut rng); let mut sb = unsafe { SuperBlock::new() }; let mut rdr = sb.create_reader(); for v in ins.into_iter() { let mut wcurs = sb.create_writer(); let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); rdr = sb.pre_commit(wcurs, &rdr); } // println!("{:?}", node); // On shutdown, check we dropped all as needed. mem::drop(rdr); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_search_1() { let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); for v in 1..(H_CAPACITY << 4) { let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); let r = wcurs.search(v as u64, &v); assert!(r.unwrap() == &v); } for v in 1..(H_CAPACITY << 4) { let r = wcurs.search(v as u64, &v); assert!(r.unwrap() == &v); } // On shutdown, check we dropped all as needed. 
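The tests in this module pass `v as u64` directly as the precomputed hash; outside of tests the cursor derives it with the `hash_key!` macro from the stored `build_hasher`. A sketch of that derivation using only std (`hash_with` is a hypothetical helper name, not the macro's real expansion):

```rust
use std::borrow::Borrow;
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

// Hypothetical equivalent of what hash_key! computes: hash a borrowed key
// form Q with the map's own BuildHasher, so lookups by &str agree with
// insertions keyed by String.
pub fn hash_with<K, Q, S>(build_hasher: &S, k: &Q) -> u64
where
    K: Borrow<Q>,
    Q: Hash + Eq + ?Sized,
    S: BuildHasher,
{
    let mut hasher = build_hasher.build_hasher();
    k.hash(&mut hasher);
    hasher.finish()
}
```

The important property is that the same `BuildHasher` instance is cloned into every cursor, so a hash computed by a reader always matches one computed by the writer that inserted the key.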
mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_length_1() { // Check the length is consistent on operations. let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); for v in 1..(H_CAPACITY << 4) { let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); } // println!("{} == {}", wcurs.len(), H_CAPACITY << 4); assert!(wcurs.len() == H_CAPACITY << 4); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_01_p0() { // Check that a single value can be removed correctly without change. // Check that a missing value is removed as "None". // Check that emptying the root is ok. // BOTH of these need new txns to check clone, and then re-use txns. // // let lnode = create_leaf_node_full(0); let sb = SuperBlock::new_test(1, lnode); let mut wcurs = sb.create_writer(); // println!("{:?}", wcurs); for v in 0..H_CAPACITY { let x = wcurs.remove(v as u64, &v); // println!("{:?}", wcurs); assert!(x == Some(v)); } for v in 0..H_CAPACITY { let x = wcurs.remove(v as u64, &v); assert!(x.is_none()); } mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_01_p1() { let node = create_leaf_node(0); let sb = SuperBlock::new_test(1, node); let mut wcurs = sb.create_writer(); let _ = wcurs.remove(0, &0); // println!("{:?}", wcurs); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_02() { // Given the tree: // // root // / \ // leaf split leaf // // Remove from "split leaf" and merge left. (new txn) let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let znode = create_leaf_node(0); let root = Node::new_branch(0, znode, lnode); // Prevent the tree shrinking. 
unsafe { (*root).add_node(rnode) }; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); println!("{:?}", wcurs); assert!(wcurs.verify()); wcurs.remove(20, &20); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_03() { // Given the tree: // // root // / \ // leaf split leaf // // Remove from "leaf" and merge right (really left, but you know ...). (new txn) let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let znode = create_leaf_node(30); let root = Node::new_branch(0, lnode, rnode); // Prevent the tree shrinking. unsafe { (*root).add_node(znode) }; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.remove(10, &10); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_04p0() { // Given the tree: // // root // / \ // leaf split leaf // // Remove from "split leaf" and merge left. (leaf cloned already) let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let znode = create_leaf_node(0); let root = Node::new_branch(0, znode, lnode); // Prevent the tree shrinking. unsafe { (*root).add_node(rnode) }; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); // Setup sibling leaf to already be cloned. wcurs.path_clone(10); assert!(wcurs.verify()); wcurs.remove(20, &20); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_04p1() { // Given the tree: // // root // / \ // leaf split leaf // // Remove from "split leaf" and merge left. (leaf cloned already) let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let znode = create_leaf_node(0); let root = Node::new_branch(0, znode, lnode); // Prevent the tree shrinking. 
unsafe { (*root).add_node(rnode) }; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); // Setup leaf to already be cloned. wcurs.path_clone(20); assert!(wcurs.verify()); wcurs.remove(20, &20); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_05() { // Given the tree: // // root // / \ // leaf split leaf // // Remove from "leaf" and merge 'right'. (split leaf cloned already) let lnode = create_leaf_node(10); let rnode = create_leaf_node(20); let znode = create_leaf_node(30); let root = Node::new_branch(0, lnode, rnode); // Prevent the tree shrinking. unsafe { (*root).add_node(znode) }; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); // Setup leaf to already be cloned. wcurs.path_clone(20); wcurs.remove(10, &10); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_06() { // Given the tree: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 r2 // // conditions: // lbranch - 2node // rbranch - 2node // txn - new // // when remove from rbranch, mergc left to lbranch. // should cause tree height reduction. 
let l1 = create_leaf_node(0); let l2 = create_leaf_node(10); let r1 = create_leaf_node(20); let r2 = create_leaf_node(30); let lbranch = Node::new_branch(0, l1, l2); let rbranch = Node::new_branch(0, r1, r2); let root: *mut Branch = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.remove(30, &30); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_07() { // Given the tree: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 r2 // // conditions: // lbranch - 2node // rbranch - 2node // txn - new // // when remove from lbranch, merge right to rbranch. // should cause tree height reduction. let l1 = create_leaf_node(0); let l2 = create_leaf_node(10); let r1 = create_leaf_node(20); let r2 = create_leaf_node(30); let lbranch = Node::new_branch(0, l1, l2); let rbranch = Node::new_branch(0, r1, r2); let root: *mut Branch = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.remove(10, &10); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_08() { // Given the tree: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 r2 // // conditions: // lbranch - full // rbranch - 2node // txn - new // // when remove from rbranch, borrow from lbranch // will NOT reduce height let lbranch = create_branch_node_full(0); let r1 = create_leaf_node(80); let r2 = create_leaf_node(90); let rbranch = Node::new_branch(0, r1, r2); let root: *mut Branch = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.remove(80, &80); assert!(wcurs.verify()); mem::drop(wcurs); 
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_hashmap2_cursor_remove_09() {
        // Given the tree:
        //
        //           root
        //          /    \
        //   lbranch      rbranch
        //   /     \      /     \
        //  l1     l2    r1     r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - full
        // txn     - new
        //
        // when remove from lbranch, borrow from rbranch
        // will NOT reduce height
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = create_branch_node_full(100);
        let root: *mut Branch = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        wcurs.remove(10, &10);
        assert!(wcurs.verify());
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_hashmap2_cursor_remove_10() {
        // Given the tree:
        //
        //           root
        //          /    \
        //   lbranch      rbranch
        //   /     \      /     \
        //  l1     l2    r1     r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - 2node
        // txn     - touch lbranch
        //
        // when remove from rbranch, merge left to lbranch.
        // should cause tree height reduction.
        let l1 = create_leaf_node(0);
        let l2 = create_leaf_node(10);
        let r1 = create_leaf_node(20);
        let r2 = create_leaf_node(30);
        let lbranch = Node::new_branch(0, l1, l2);
        let rbranch = Node::new_branch(0, r1, r2);
        let root: *mut Branch = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _);
        let sb = SuperBlock::new_test(1, root as *mut Node);
        let mut wcurs = sb.create_writer();
        assert!(wcurs.verify());
        wcurs.path_clone(0);
        wcurs.path_clone(10);
        wcurs.remove(30, &30);
        assert!(wcurs.verify());
        mem::drop(wcurs);
        mem::drop(sb);
        assert_released();
    }

    #[test]
    fn test_hashmap2_cursor_remove_11() {
        // Given the tree:
        //
        //           root
        //          /    \
        //   lbranch      rbranch
        //   /     \      /     \
        //  l1     l2    r1     r2
        //
        // conditions:
        // lbranch - 2node
        // rbranch - 2node
        // txn     - touch rbranch
        //
        // when remove from lbranch, merge right to rbranch.
        // should cause tree height reduction.
let l1 = create_leaf_node(0); let l2 = create_leaf_node(10); let r1 = create_leaf_node(20); let r2 = create_leaf_node(30); let lbranch = Node::new_branch(0, l1, l2); let rbranch = Node::new_branch(0, r1, r2); let root: *mut Branch = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _); let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.path_clone(20); wcurs.path_clone(30); wcurs.remove(0, &0); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_12() { // Given the tree: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 r2 // // conditions: // lbranch - full // rbranch - 2node // txn - touch lbranch // // when remove from rbranch, borrow from lbranch // will NOT reduce height let lbranch = create_branch_node_full(0); let r1 = create_leaf_node(80); let r2 = create_leaf_node(90); let rbranch = Node::new_branch(0, r1, r2); let root = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _) as *mut Node; // let count = HBV_CAPACITY + 2; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.path_clone(0); wcurs.path_clone(10); wcurs.path_clone(20); wcurs.remove(90, &90); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_13() { // Given the tree: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 r2 // // conditions: // lbranch - 2node // rbranch - full // txn - touch rbranch // // when remove from lbranch, borrow from rbranch // will NOT reduce height let l1 = create_leaf_node(0); let l2 = create_leaf_node(10); let lbranch = Node::new_branch(0, l1, l2); let rbranch = create_branch_node_full(100); let root = Node::new_branch(0, lbranch as *mut _, rbranch as *mut _) as *mut Node; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); 
assert!(wcurs.verify()); for i in 0..HBV_CAPACITY { let k = 100 + (10 * i); wcurs.path_clone(k as u64); } assert!(wcurs.verify()); wcurs.remove(10, &10); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_14() { // Test leaf borrow left let lnode = create_leaf_node_full(10); let rnode = create_leaf_node(20); let root = Node::new_branch(0, lnode, rnode) as *mut Node; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.remove(20, &20); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_15() { // Test leaf borrow right. let lnode = create_leaf_node(10) as *mut Node; let rnode = create_leaf_node_full(20) as *mut Node; let root = Node::new_branch(0, lnode, rnode) as *mut Node; let sb = SuperBlock::new_test(1, root as *mut Node); let mut wcurs = sb.create_writer(); assert!(wcurs.verify()); wcurs.remove(10, &10); assert!(wcurs.verify()); mem::drop(wcurs); mem::drop(sb); assert_released(); } fn tree_create_rand() -> (SuperBlock, CursorRead) { let mut rng = rand::thread_rng(); let mut ins: Vec = (1..(H_CAPACITY << 4)).collect(); ins.shuffle(&mut rng); let mut sb = unsafe { SuperBlock::new() }; let rdr = sb.create_reader(); let mut wcurs = sb.create_writer(); for v in ins.into_iter() { let r = wcurs.insert(v as u64, v, v); assert!(r.is_none()); assert!(wcurs.verify()); } let rdr = sb.pre_commit(wcurs, &rdr); (sb, rdr) } #[test] fn test_hashmap2_cursor_remove_stress_1() { // Insert ascending - we want to ensure the tree is a few levels deep // so we do this to a reasonable number. let (mut sb, rdr) = tree_create_rand(); let mut wcurs = sb.create_writer(); for v in 1..(H_CAPACITY << 4) { // println!("-- ITER v {}", v); let r = wcurs.remove(v as u64, &v); assert!(r == Some(v)); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // On shutdown, check we dropped all as needed. 
let rdr2 = sb.pre_commit(wcurs, &rdr); std::mem::drop(rdr2); std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_stress_2() { // Insert descending let (mut sb, rdr) = tree_create_rand(); let mut wcurs = sb.create_writer(); for v in (1..(H_CAPACITY << 4)).rev() { // println!("ITER v {}", v); let r = wcurs.remove(v as u64, &v); assert!(r == Some(v)); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // println!("DENSITY -> {:?}", wcurs.get_tree_density()); // On shutdown, check we dropped all as needed. let rdr2 = sb.pre_commit(wcurs, &rdr); std::mem::drop(rdr2); std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_stress_3() { // Insert random let mut rng = rand::thread_rng(); let mut ins: Vec = (1..(H_CAPACITY << 4)).collect(); ins.shuffle(&mut rng); let (mut sb, rdr) = tree_create_rand(); let mut wcurs = sb.create_writer(); for v in ins.into_iter() { let r = wcurs.remove(v as u64, &v); assert!(r == Some(v)); assert!(wcurs.verify()); } // println!("{:?}", wcurs); // println!("DENSITY -> {:?}", wcurs.get_tree_density()); // On shutdown, check we dropped all as needed. let rdr2 = sb.pre_commit(wcurs, &rdr); std::mem::drop(rdr2); std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } // Add transaction-ised versions. #[test] fn test_hashmap2_cursor_remove_stress_4() { // Insert ascending - we want to ensure the tree is a few levels deep // so we do this to a reasonable number. let (mut sb, mut rdr) = tree_create_rand(); for v in 1..(H_CAPACITY << 4) { let mut wcurs = sb.create_writer(); // println!("ITER v {}", v); let r = wcurs.remove(v as u64, &v); assert!(r == Some(v)); assert!(wcurs.verify()); rdr = sb.pre_commit(wcurs, &rdr); } // println!("{:?}", node); // On shutdown, check we dropped all as needed. 
std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_stress_5() { // Insert descending let (mut sb, mut rdr) = tree_create_rand(); for v in (1..(H_CAPACITY << 4)).rev() { let mut wcurs = sb.create_writer(); // println!("ITER v {}", v); let r = wcurs.remove(v as u64, &v); assert!(r == Some(v)); assert!(wcurs.verify()); rdr = sb.pre_commit(wcurs, &rdr); } // println!("{:?}", node); // On shutdown, check we dropped all as needed. // mem::drop(node); std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } #[test] fn test_hashmap2_cursor_remove_stress_6() { // Insert random let mut rng = rand::thread_rng(); let mut ins: Vec = (1..(H_CAPACITY << 4)).collect(); ins.shuffle(&mut rng); let (mut sb, mut rdr) = tree_create_rand(); for v in ins.into_iter() { let mut wcurs = sb.create_writer(); let r = wcurs.remove(v as u64, &v); assert!(r == Some(v)); assert!(wcurs.verify()); rdr = sb.pre_commit(wcurs, &rdr); } // println!("{:?}", node); // On shutdown, check we dropped all as needed. // mem::drop(node); std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } /* #[test] #[cfg_attr(miri, ignore)] fn test_hashmap2_cursor_remove_stress_7() { // Insert random let mut rng = rand::thread_rng(); let mut ins: Vec = (1..10240).collect(); let node: *mut Leaf = Node::new_leaf(0); let mut wcurs = CursorWrite::new_test(1, node as *mut _); wcurs.extend(ins.iter().map(|v| (*v, *v))); ins.shuffle(&mut rng); let compacts = 0; for v in ins.into_iter() { let r = wcurs.remove(&v); assert!(r == Some(v)); assert!(wcurs.verify()); // let (l, m) = wcurs.tree_density(); // if l > 0 && (m / l) > 1 { // compacts += 1; // } } println!("compacts {:?}", compacts); } */ /* #[test] fn test_bptree_cursor_get_mut_ref_1() { // Test that we can clone a path (new txn) // Test that we don't re-clone. 
        let lnode = create_leaf_node_full(10);
        let rnode = create_leaf_node_full(20);
        let root = Node::new_branch(0, lnode, rnode);
        let mut wcurs = CursorWrite::new(root, 0);
        assert!(wcurs.verify());

        let r1 = wcurs.get_mut_ref(&10);
        std::mem::drop(r1);
        let r1 = wcurs.get_mut_ref(&10);
        std::mem::drop(r1);
    }
    */
}
concread-0.4.6/src/internals/hashmap/iter.rs
//! Iterators for the map.
// Iterators for the bptree
use super::node::{Branch, Leaf, Meta, Node};
use std::collections::VecDeque;
use std::fmt::Debug;
use std::hash::Hash;
use std::marker::PhantomData;

pub(crate) struct LeafIter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    length: Option<usize>,
    // idx: usize,
    stack: VecDeque<(*mut Node<K, V>, usize)>,
    phantom_k: PhantomData<&'a K>,
    phantom_v: PhantomData<&'a V>,
}

impl<K: Clone + Hash + Eq + Debug, V: Clone> LeafIter<'_, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, size_hint: bool) -> Self {
        let length = if size_hint {
            Some(unsafe { (*root).leaf_count() })
        } else {
            None
        };
        // We probably need to position the VecDeque here.
        let mut stack = VecDeque::new();
        let mut work_node = root;
        loop {
            stack.push_back((work_node, 0));
            if self_meta!(work_node).is_leaf() {
                break;
            } else {
                work_node = branch_ref!(work_node, K, V).get_idx_unchecked(0);
            }
        }
        LeafIter {
            length,
            // idx: 0,
            stack,
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }

    #[cfg(test)]
    pub(crate) fn new_base() -> Self {
        LeafIter {
            length: None,
            // idx: 0,
            stack: VecDeque::new(),
            phantom_k: PhantomData,
            phantom_v: PhantomData,
        }
    }

    pub(crate) fn stack_position(&mut self, idx: usize) {
        // Get the current branch, it must be the back.
        if let Some((bref, bpidx)) = self.stack.back() {
            let wbranch = branch_ref!(*bref, K, V);
            if let Some(node) = wbranch.get_idx_checked(idx) {
                // Insert as much as possible now. First insert
                // our current idx, then all the 0, idxs.
                let mut work_node = node;
                let mut work_idx = idx;
                loop {
                    self.stack.push_back((work_node, work_idx));
                    if self_meta!(work_node).is_leaf() {
                        break;
                    } else {
                        work_idx = 0;
                        work_node = branch_ref!(work_node, K, V).get_idx_unchecked(work_idx);
                    }
                }
            } else {
                // Unwind further.
                let bpidx = *bpidx + 1;
                let _ = self.stack.pop_back();
                self.stack_position(bpidx)
            }
        }
        // Must have been none, so we are exhausted. This means
        // the stack is empty, so return.
    }

    /*
    fn peek(&mut self) -> Option<&Leaf<K, V>> {
        // I have no idea how peekable works, yolo.
        self.stack.back().map(|t| t.0.as_leaf())
    }
    */
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for LeafIter<'a, K, V> {
    type Item = &'a Leaf<K, V>;

    fn next(&mut self) -> Option<Self::Item> {
        // base case is the vecdeque is empty
        let (leafref, parent_idx) = match self.stack.pop_back() {
            Some(lr) => lr,
            None => return None,
        };

        // Setup the vecdeque for the next iteration.
        self.stack_position(parent_idx + 1);

        // Return the leaf as we found at the start, regardless of the
        // stack operations.
        Some(leaf_ref!(leafref, K, V))
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        match self.length {
            Some(l) => (l, Some(l)),
            // We aren't (shouldn't) be estimating
            None => (0, None),
        }
    }
}

/// Iterator over references to Key Value pairs stored in the map.
pub struct Iter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    length: usize,
    slot_idx: usize,
    bk_idx: usize,
    curleaf: Option<&'a Leaf<K, V>>,
    leafiter: LeafIter<'a, K, V>,
}

impl<K: Clone + Hash + Eq + Debug, V: Clone> Iter<'_, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, length: usize) -> Self {
        let mut liter = LeafIter::new(root, false);
        let leaf = liter.next();
        // We probably need to position the VecDeque here.
        Iter {
            length,
            slot_idx: 0,
            bk_idx: 0,
            curleaf: leaf,
            leafiter: liter,
        }
    }
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for Iter<'a, K, V> {
    type Item = (&'a K, &'a V);

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        if let Some(leaf) = self.curleaf {
            if let Some(r) = leaf.get_kv_idx_checked(self.slot_idx, self.bk_idx) {
                self.bk_idx += 1;
                Some(r)
            } else {
                // Are we partway in a bucket?
                if self.bk_idx > 0 {
                    // It's probably ended, next slot.
                    self.slot_idx += 1;
                    self.bk_idx = 0;
                    self.next()
                } else {
                    // We've exhausted the slots since bk_idx == 0 was empty.
                    self.curleaf = self.leafiter.next();
                    self.slot_idx = 0;
                    self.bk_idx = 0;
                    self.next()
                }
            }
        } else {
            None
        }
    }

    /// Provide a hint as to the number of items this iterator will yield.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.length, Some(self.length))
    }
}

/// Iterator over references to Keys stored in the map.
pub struct KeyIter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    iter: Iter<'a, K, V>,
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> KeyIter<'a, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, length: usize) -> Self {
        KeyIter {
            iter: Iter::new(root, length),
        }
    }
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for KeyIter<'a, K, V> {
    type Item = &'a K;

    /// Yield the next key reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|(k, _)| k)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

/// Iterator over references to Values stored in the map.
pub struct ValueIter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    iter: Iter<'a, K, V>,
}

impl<K: Clone + Hash + Eq + Debug, V: Clone> ValueIter<'_, K, V> {
    pub(crate) fn new(root: *mut Node<K, V>, length: usize) -> Self {
        ValueIter {
            iter: Iter::new(root, length),
        }
    }
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for ValueIter<'a, K, V> {
    type Item = &'a V;

    /// Yield the next value reference, or `None` if exhausted.
fn next(&mut self) -> Option { self.iter.next().map(|(_, v)| v) } fn size_hint(&self) -> (usize, Option) { self.iter.size_hint() } } #[cfg(test)] mod tests { use super::super::cursor::SuperBlock; use super::super::node::{Branch, Leaf, Node, H_CAPACITY}; use super::{Iter, LeafIter}; fn create_leaf_node_full(vbase: usize) -> *mut Node { assert!(vbase % 10 == 0); let node = Node::new_leaf(0); { let nmut = leaf_ref!(node, usize, usize); for idx in 0..H_CAPACITY { let v = vbase + idx; nmut.insert_or_update(v as u64, v, v); } } node as *mut _ } #[test] fn test_hashmap2_iter_leafiter_1() { let test_iter: LeafIter = LeafIter::new_base(); assert!(test_iter.count() == 0); } #[test] fn test_hashmap2_iter_leafiter_2() { let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, true); assert!(test_iter.size_hint() == (1, Some(1))); let lref = test_iter.next().unwrap(); assert!(lref.min() == 10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_hashmap2_iter_leafiter_3() { let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, true); assert!(test_iter.size_hint() == (2, Some(2))); let lref = test_iter.next().unwrap(); let rref = test_iter.next().unwrap(); assert!(lref.min() == 10); assert!(rref.min() == 20); assert!(test_iter.next().is_none()); // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_hashmap2_iter_leafiter_4() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let mut test_iter: LeafIter = LeafIter::new(root as *mut _, true); assert!(test_iter.size_hint() == (4, Some(4))); let l1ref = test_iter.next().unwrap(); let r1ref = test_iter.next().unwrap(); let l2ref = test_iter.next().unwrap(); let r2ref = test_iter.next().unwrap(); assert!(l1ref.min() == 10); assert!(r1ref.min() == 20); assert!(l2ref.min() == 30); assert!(r2ref.min() == 40); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_hashmap2_iter_leafiter_5() { let lnode = create_leaf_node_full(10); let mut test_iter = LeafIter::new(lnode, true); assert!(test_iter.size_hint() == (1, Some(1))); let lref = test_iter.next().unwrap(); assert!(lref.min() == 10); assert!(test_iter.next().is_none()); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, lnode as *mut _); } #[test] fn test_hashmap2_iter_iter_1() { // Make a tree let lnode = create_leaf_node_full(10); let rnode = create_leaf_node_full(20); let root = Node::new_branch(0, lnode, rnode); let test_iter: Iter = Iter::new(root as *mut _, H_CAPACITY * 2); assert!(test_iter.size_hint() == (H_CAPACITY * 2, Some(H_CAPACITY * 2))); assert!(test_iter.count() == H_CAPACITY * 2); // Iterate! // This drops everything. 
let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } #[test] fn test_hashmap2_iter_iter_2() { let l1node = create_leaf_node_full(10); let r1node = create_leaf_node_full(20); let l2node = create_leaf_node_full(30); let r2node = create_leaf_node_full(40); let b1node = Node::new_branch(0, l1node, r1node); let b2node = Node::new_branch(0, l2node, r2node); let root: *mut Branch = Node::new_branch(0, b1node as *mut _, b2node as *mut _); let test_iter: Iter = Iter::new(root as *mut _, H_CAPACITY * 4); // println!("{:?}", test_iter.size_hint()); assert!(test_iter.size_hint() == (H_CAPACITY * 4, Some(H_CAPACITY * 4))); assert!(test_iter.count() == H_CAPACITY * 4); // This drops everything. let _sb: SuperBlock = SuperBlock::new_test(1, root as *mut _); } } concread-0.4.6/src/internals/hashmap/macros.rs000064400000000000000000000020371046102023000174540ustar 00000000000000macro_rules! hash_key { ($self:expr, $k:expr) => {{ let mut hasher = $self.build_hasher.build_hasher(); $k.hash(&mut hasher); hasher.finish() }}; } macro_rules! debug_assert_leaf { ($x:expr) => {{ debug_assert!(unsafe { $x.ctrl.a.0.is_leaf() }); }}; } macro_rules! debug_assert_branch { ($x:expr) => {{ debug_assert!(unsafe { $x.ctrl.a.0.is_branch() }); }}; } macro_rules! self_meta { ($x:expr) => {{ #[allow(unused_unsafe)] unsafe { &mut *($x as *mut Meta) } }}; } macro_rules! branch_ref { ($x:expr, $k:ty, $v:ty) => {{ #[allow(unused_unsafe)] unsafe { debug_assert!(unsafe { (*$x).ctrl.a.0.is_branch() }); &mut *($x as *mut Branch<$k, $v>) } }}; } macro_rules! leaf_ref { ($x:expr, $k:ty, $v:ty) => {{ #[allow(unused_unsafe)] unsafe { debug_assert!(unsafe { (*$x).ctrl.a.0.is_leaf() }); &mut *($x as *mut Leaf<$k, $v>) } }}; } concread-0.4.6/src/internals/hashmap/mod.rs000064400000000000000000000022201046102023000167410ustar 00000000000000//! HashMap - A concurrently readable HashMap //! //! This is a specialisation of the `BptreeMap`, allowing a concurrently readable //! HashMap. 
Unlike a traditional hashmap it does *not* have `O(1)` lookup, as it
//! internally uses a tree-like structure to store a series of buckets. However,
//! if you do not need key-ordering, due to the storage of the hashes as `u64`
//! the operations in the tree to seek the bucket are much faster than the use of
//! the same key in the `BptreeMap`.
//!
//! For more details, see the [BptreeMap](crate::bptree::BptreeMap)
//!
//! This structure is very different to the `im` crate. The `im` crate is
//! sync + send over individual operations. This means that multiple writes can
//! be interleaved atomically and safely, and the readers always see the latest
//! data. While this is potentially useful to a set of problems, transactional
//! structures are suited to problems where readers have to maintain consistent
//! data views for a duration of time, CPU cache friendly behaviours and
//! database like transaction properties (ACID).

#[macro_use]
mod macros;
pub mod cursor;
pub mod iter;
mod node;
mod simd;
mod states;
concread-0.4.6/src/internals/hashmap/node.rs
use super::cursor::Datum;
use super::simd::*;
use super::states::*;
use crate::utils::*;

use crossbeam_utils::CachePadded;
use std::borrow::Borrow;
use std::fmt::{self, Debug, Error};
use std::hash::Hash;
use std::marker::PhantomData;
use std::mem::ManuallyDrop;
use std::mem::MaybeUninit;
use std::ptr;

use smallvec::SmallVec;

#[cfg(feature = "simd_support")]
use core_simd::u64x8;

#[cfg(test)]
use std::collections::BTreeSet;
#[cfg(all(test, not(miri)))]
use std::sync::atomic::{AtomicUsize, Ordering};
#[cfg(all(test, not(miri)))]
use std::sync::Mutex;

pub(crate) const TXID_MASK: u64 = 0x0fff_ffff_ffff_fff0;
const FLAG_MASK: u64 = 0xf000_0000_0000_0000;
const COUNT_MASK: u64 = 0x0000_0000_0000_000f;
pub(crate) const TXID_SHF: usize = 4;

// const FLAG__BRANCH: u64 = 0x1000_0000_0000_0000;
// const FLAG__LEAF: u64 = 0x2000_0000_0000_0000;
const
FLAG_HASH_LEAF: u64 = 0x4000_0000_0000_0000; const FLAG_HASH_BRANCH: u64 = 0x8000_0000_0000_0000; const FLAG_DROPPED: u64 = 0xeeee_ffff_aaaa_bbbb; #[cfg(all(test, not(miri)))] const FLAG_POISON: u64 = 0xabcd_abcd_abcd_abcd; pub(crate) const H_CAPACITY: usize = 7; const H_CAPACITY_N1: usize = H_CAPACITY - 1; pub(crate) const HBV_CAPACITY: usize = H_CAPACITY + 1; const DEFAULT_BUCKET_ALLOC: usize = 1; #[cfg(not(feature = "simd_support"))] #[allow(non_camel_case_types)] pub struct u64x8 { _data: [u64; 8], } #[cfg(not(feature = "simd_support"))] impl u64x8 { #[inline(always)] fn from_array(data: [u64; 8]) -> Self { Self { _data: data } } } #[cfg(all(test, not(miri)))] thread_local!(static NODE_COUNTER: AtomicUsize = const { AtomicUsize::new(1) }); #[cfg(all(test, not(miri)))] thread_local!(static ALLOC_LIST: Mutex> = const { Mutex::new(BTreeSet::new()) }); #[cfg(all(test, not(miri)))] fn alloc_nid() -> usize { let nid: usize = NODE_COUNTER.with(|nc| nc.fetch_add(1, Ordering::AcqRel)); #[cfg(all(test, not(miri)))] { ALLOC_LIST.with(|llist| llist.lock().unwrap().insert(nid)); } // eprintln!("Allocate -> {:?}", nid); nid } #[cfg(all(test, not(miri)))] fn release_nid(nid: usize) { // println!("Release -> {:?}", nid); // debug_assert!(nid != 3); { let r = ALLOC_LIST.with(|llist| llist.lock().unwrap().remove(&nid)); assert!(r); } } #[cfg(test)] pub(crate) fn assert_released() { #[cfg(not(miri))] { let is_empt = ALLOC_LIST.with(|llist| { let x = llist.lock().unwrap(); // println!("Remaining -> {:?}", x); x.is_empty() }); assert!(is_empt); } } #[repr(C)] pub(crate) struct Meta(u64); pub(crate) union Ctrl { pub a: ManuallyDrop<(Meta, [u64; H_CAPACITY])>, pub simd: ManuallyDrop, } #[repr(C, align(64))] pub(crate) struct Branch where K: Hash + Eq + Clone + Debug, V: Clone, { pub ctrl: Ctrl, #[cfg(all(test, not(miri)))] poison: u64, nodes: [*mut Node; HBV_CAPACITY], #[cfg(all(test, not(miri)))] pub(crate) nid: usize, } type Bucket = SmallVec<[Datum; DEFAULT_BUCKET_ALLOC]>; 
#[repr(C, align(64))] pub(crate) struct Leaf where K: Hash + Eq + Clone + Debug, V: Clone, { pub ctrl: Ctrl, #[cfg(all(test, not(miri)))] poison: u64, pub values: [MaybeUninit>; H_CAPACITY], #[cfg(all(test, not(miri)))] pub nid: usize, } #[repr(C, align(64))] pub(crate) struct Node where K: Hash + Eq + Clone + Debug, V: Clone, { pub(crate) ctrl: Ctrl, k: PhantomData, v: PhantomData, } unsafe impl Send for Node { } unsafe impl Sync for Node { } impl Node { pub(crate) fn new_leaf(txid: u64) -> *mut Leaf { // println!("Req new hash leaf"); debug_assert!(txid < (TXID_MASK >> TXID_SHF)); let x: Box>> = Box::new(CachePadded::new(Leaf { ctrl: Ctrl { simd: ManuallyDrop::new(u64x8::from_array([ (txid << TXID_SHF) | FLAG_HASH_LEAF, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, ])), }, #[cfg(all(test, not(miri)))] poison: FLAG_POISON, values: unsafe { MaybeUninit::uninit().assume_init() }, #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); Box::into_raw(x) as *mut Leaf } fn new_leaf_bk(flags: u64, h: u64, bk: Bucket) -> *mut Leaf { // println!("Req new hash leaf ins"); // debug_assert!(false); debug_assert!((flags & FLAG_MASK) == FLAG_HASH_LEAF); let x: Box>> = Box::new(CachePadded::new(Leaf { // Let the flag, txid and the slots of value 1 through. 
ctrl: Ctrl { simd: ManuallyDrop::new(u64x8::from_array([ flags & (TXID_MASK | FLAG_MASK | 1), h, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, ])), }, #[cfg(all(test, not(miri)))] poison: FLAG_POISON, values: [ MaybeUninit::new(bk), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), MaybeUninit::uninit(), ], #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); Box::into_raw(x) as *mut Leaf } fn new_leaf_ins(flags: u64, h: u64, k: K, v: V) -> *mut Leaf { Self::new_leaf_bk(flags, h, smallvec![Datum { k, v }]) } pub(crate) fn new_branch( txid: u64, l: *mut Node, r: *mut Node, ) -> *mut Branch { // println!("Req new branch"); debug_assert!(!l.is_null()); debug_assert!(!r.is_null()); debug_assert!(unsafe { (*l).verify() }); debug_assert!(unsafe { (*r).verify() }); debug_assert!(txid < (TXID_MASK >> TXID_SHF)); let pivot = unsafe { (*r).min() }; let x: Box>> = Box::new(CachePadded::new(Branch { // This sets the default (key) slots to 1, since we take an l/r ctrl: Ctrl { simd: ManuallyDrop::new(u64x8::from_array([ (txid << TXID_SHF) | FLAG_HASH_BRANCH | 1, pivot, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, ])), }, #[cfg(all(test, not(miri)))] poison: FLAG_POISON, nodes: [ l, r, ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ptr::null_mut(), ], #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); let b = Box::into_raw(x) as *mut Branch; debug_assert!(unsafe { (*b).verify() }); b } #[inline(always)] #[cfg(test)] pub(crate) fn get_txid(&self) -> u64 { unsafe { self.ctrl.a.0.get_txid() } } #[inline(always)] pub(crate) fn is_leaf(&self) -> bool { unsafe { self.ctrl.a.0.is_leaf() } } #[inline(always)] #[allow(unused)] pub(crate) fn is_branch(&self) -> bool { unsafe { self.ctrl.a.0.is_branch() } } #[cfg(test)] pub(crate) fn tree_density(&self) -> (usize, usize, usize) { match unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { let lref = 
unsafe { &*(self as *const _ as *const Leaf) }; (lref.count(), lref.slots(), H_CAPACITY) } FLAG_HASH_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; let mut lcount = 0; // leaf count let mut lslots = 0; // leaf populated slots let mut mslots = 0; // leaf max possible for idx in 0..(bref.slots() + 1) { let n = bref.nodes[idx] as *mut Node; let (c, l, m) = unsafe { (*n).tree_density() }; lcount += c; lslots += l; mslots += m; } (lcount, lslots, mslots) } _ => unreachable!(), } } pub(crate) fn leaf_count(&self) -> usize { match unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => 1, FLAG_HASH_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; let mut lcount = 0; // leaf count for idx in 0..(bref.slots() + 1) { let n = bref.nodes[idx]; lcount += unsafe { (*n).leaf_count() }; } lcount } _ => unreachable!(), } } #[cfg(test)] #[inline(always)] pub(crate) fn get_ref(&self, h: u64, k: &Q) -> Option<&V> where K: Borrow, Q: Eq, { match unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.get_ref(h, k) } FLAG_HASH_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.get_ref(h, k) } _ => { // println!("FLAGS: {:x}", self.meta.0); unreachable!() } } } #[inline(always)] pub(crate) fn min(&self) -> u64 { match unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.min() } FLAG_HASH_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.min() } _ => unreachable!(), } } #[inline(always)] pub(crate) fn max(&self) -> u64 { match unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.max() } FLAG_HASH_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.max() } _ => unreachable!(), } } #[inline(always)] pub(crate) fn verify(&self) -> bool { match 
unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { let lref = unsafe { &*(self as *const _ as *const Leaf) }; lref.verify() } FLAG_HASH_BRANCH => { let bref = unsafe { &*(self as *const _ as *const Branch) }; bref.verify() } _ => unreachable!(), } } #[cfg(test)] fn no_cycles_inner(&self, track: &mut BTreeSet<*const Self>) -> bool { match unsafe { self.ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { // check if we are in the set? track.insert(self as *const Self) } FLAG_HASH_BRANCH => { if track.insert(self as *const Self) { // check let bref = unsafe { &*(self as *const _ as *const Branch) }; for i in 0..(bref.slots() + 1) { let n = bref.nodes[i]; let r = unsafe { (*n).no_cycles_inner(track) }; if !r { // panic!(); return false; } } true } else { // panic!(); false } } _ => { // println!("FLAGS: {:x}", self.meta.0); unreachable!() } } } #[cfg(test)] pub(crate) fn no_cycles(&self) -> bool { let mut track = BTreeSet::new(); self.no_cycles_inner(&mut track) } pub(crate) fn sblock_collect(&mut self, alloc: &mut Vec<*mut Node>) { // Reset our txid. // self.meta.0 &= FLAG_MASK | COUNT_MASK; // self.meta.0 |= txid << TXID_SHF; if (unsafe { self.ctrl.a.0 .0 } & FLAG_MASK) == FLAG_HASH_BRANCH { let bref = unsafe { &*(self as *const _ as *const Branch) }; for idx in 0..(bref.slots() + 1) { alloc.push(bref.nodes[idx]); let n = bref.nodes[idx]; unsafe { (*n).sblock_collect(alloc) }; } } } pub(crate) fn free(node: *mut Node) { let self_meta = self_meta!(node); match self_meta.0 & FLAG_MASK { FLAG_HASH_LEAF => Leaf::free(node as *mut Leaf), FLAG_HASH_BRANCH => Branch::free(node as *mut Branch), _ => unreachable!(), } } } impl Meta { #[inline(always)] fn set_slots(&mut self, c: usize) { debug_assert!(c < 16); // Zero the bits in the flag from the slots. self.0 &= FLAG_MASK | TXID_MASK; // Assign them. 
self.0 |= c as u8 as u64; } #[inline(always)] pub(crate) fn slots(&self) -> usize { (self.0 & COUNT_MASK) as usize } #[inline(always)] fn add_slots(&mut self, x: usize) { self.set_slots(self.slots() + x); } #[inline(always)] fn inc_slots(&mut self) { debug_assert!(self.slots() < 15); // Since slots is the lowest bits, we can just inc // dec this as normal. self.0 += 1; } #[inline(always)] fn dec_slots(&mut self) { debug_assert!(self.slots() > 0); self.0 -= 1; } #[inline(always)] pub(crate) fn get_txid(&self) -> u64 { (self.0 & TXID_MASK) >> TXID_SHF } #[inline(always)] pub(crate) fn is_leaf(&self) -> bool { (self.0 & FLAG_MASK) == FLAG_HASH_LEAF } #[inline(always)] pub(crate) fn is_branch(&self) -> bool { (self.0 & FLAG_MASK) == FLAG_HASH_BRANCH } } impl Leaf { #[inline(always)] #[cfg(test)] fn set_slots(&mut self, c: usize) { debug_assert_leaf!(self); unsafe { (*self.ctrl.a).0.set_slots(c) } } #[inline(always)] pub(crate) fn slots(&self) -> usize { debug_assert_leaf!(self); unsafe { self.ctrl.a.0.slots() } } #[inline(always)] fn inc_slots(&mut self) { debug_assert_leaf!(self); unsafe { (*self.ctrl.a).0.inc_slots() } } #[inline(always)] fn dec_slots(&mut self) { debug_assert_leaf!(self); unsafe { (*self.ctrl.a).0.dec_slots() } } #[inline(always)] pub(crate) fn get_txid(&self) -> u64 { debug_assert_leaf!(self); unsafe { self.ctrl.a.0.get_txid() } } pub(crate) fn count(&self) -> usize { let mut c = 0; for slot_idx in 0..self.slots() { c += unsafe { (*self.values[slot_idx].as_ptr()).len() }; } c } pub(crate) fn get_ref(&self, h: u64, k: &Q) -> Option<&V> where K: Borrow, Q: Eq + ?Sized, { debug_assert_leaf!(self); leaf_simd_search(self, h, k) .ok() .map(|(slot_idx, bk_idx)| unsafe { let bucket = (*self.values[slot_idx].as_ptr()).as_slice(); &(bucket.get_unchecked(bk_idx).v) }) } pub(crate) fn get_mut_ref(&mut self, h: u64, k: &Q) -> Option<&mut V> where K: Borrow, Q: Eq + ?Sized, { debug_assert_leaf!(self); leaf_simd_search(self, h, k) .ok() .map(|(slot_idx, bk_idx)| 
unsafe { let bucket = (*self.values[slot_idx].as_mut_ptr()).as_mut_slice(); &mut (bucket.get_unchecked_mut(bk_idx).v) }) } /* pub(crate) fn get_slot_mut_ref(&mut self, h: u64) -> Option<&mut [Datum]> where K: Borrow, Q: Eq + ?Sized, { debug_assert_leaf!(self); unsafe { leaf_simd_get_slot(self, h) .map(|slot_idx| (*self.values[slot_idx].as_mut_ptr()).as_mut_slice()) } } */ #[inline(always)] pub(crate) fn get_kv_idx_checked(&self, slot_idx: usize, bk_idx: usize) -> Option<(&K, &V)> { debug_assert_leaf!(self); if slot_idx < self.slots() { let bucket = unsafe { (*self.values[slot_idx].as_ptr()).as_slice() }; bucket.get(bk_idx).map(|d| (&d.k, &d.v)) } else { None } } pub(crate) fn min(&self) -> u64 { debug_assert!(self.slots() > 0); unsafe { self.ctrl.a.1[0] } } pub(crate) fn max(&self) -> u64 { debug_assert!(self.slots() > 0); unsafe { self.ctrl.a.1[self.slots() - 1] } } pub(crate) fn req_clone(&self, txid: u64) -> Option<*mut Node> { debug_assert_leaf!(self); debug_assert!(txid < (TXID_MASK >> TXID_SHF)); if self.get_txid() == txid { // Same txn, no action needed. None } else { // println!("Req clone leaf"); // debug_assert!(false); // Diff txn, must clone. let new_txid = (unsafe { self.ctrl.a.0 .0 } & (FLAG_MASK | COUNT_MASK)) | (txid << TXID_SHF); let x: Box>> = Box::new(CachePadded::new(Leaf { ctrl: Ctrl { simd: ManuallyDrop::new(u64x8::from_array([ new_txid, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, ])), }, #[cfg(all(test, not(miri)))] poison: FLAG_POISON, values: unsafe { MaybeUninit::uninit().assume_init() }, #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); let x = Box::into_raw(x); let xr = x as *mut Leaf; // Dup the keys unsafe { ptr::copy_nonoverlapping( &self.ctrl.a.1 as *const u64, (*(*xr).ctrl.a).1.as_mut_ptr(), H_CAPACITY, ) } // Copy in the values to the correct location. 
for idx in 0..self.slots() { unsafe { let lvalue = (*self.values[idx].as_ptr()).clone(); (*x).values[idx].as_mut_ptr().write(lvalue); } } Some(x as *mut Node<K, V>) } } pub(crate) fn insert_or_update(&mut self, h: u64, k: K, mut v: V) -> LeafInsertState<K, V> { debug_assert_leaf!(self); // Find the location we need to update let r = leaf_simd_search(self, h, &k); match r { KeyLoc::Ok(slot_idx, bk_idx) => { // It exists at idx, replace the value. let bucket = unsafe { &mut (*self.values[slot_idx].as_mut_ptr()) }; let prev = unsafe { bucket.as_mut_slice().get_unchecked_mut(bk_idx) }; std::mem::swap(&mut prev.v, &mut v); // Prev now contains the original value, return it! LeafInsertState::Ok(Some(v)) } KeyLoc::Collision(slot_idx) => { // The hash collided, but that's okay! We just append to the slice. let bucket = unsafe { &mut (*self.values[slot_idx].as_mut_ptr()) }; bucket.push(Datum { k, v }); LeafInsertState::Ok(None) } KeyLoc::Missing(idx) => { if self.slots() >= H_CAPACITY { // Overflow to a new node if idx >= self.slots() { // Greater than all else, split right let rnode = Node::new_leaf_ins(unsafe { self.ctrl.a.0 .0 }, h, k, v); LeafInsertState::Split(rnode) } else if idx == 0 { // Lower than all else, split left. let lnode = Node::new_leaf_ins(unsafe { self.ctrl.a.0 .0 }, h, k, v); LeafInsertState::RevSplit(lnode) } else { // Within our range, pop max, insert, and split right. // This is not a bucket add, it's a new bucket! let pk = unsafe { slice_remove(&mut (*self.ctrl.a).1, H_CAPACITY - 1) }; let pbk = unsafe { slice_remove(&mut self.values, H_CAPACITY - 1).assume_init() }; let rnode = Node::new_leaf_bk(unsafe { self.ctrl.a.0 .0 }, pk, pbk); unsafe { slice_insert(&mut (*self.ctrl.a).1, h, idx); slice_insert( &mut self.values, MaybeUninit::new(smallvec![Datum { k, v }]), idx, ); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); LeafInsertState::Split(rnode) } } else { // We have space.
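The overflow handling in `insert_or_update` reduces to three cases: a new maximum splits right, a new minimum splits left, and an in-range key pops the current maximum to make room and splits that to the right. This is a standalone sketch of that decision, not the crate's actual code: `InsertState`, `insert_full`, and the plain sorted array (standing in for the SIMD-searched control word) are illustrative names only.

```rust
// Capacity of the sketch node; the real H_CAPACITY depends on the SIMD width.
const CAP: usize = 7;

pub enum InsertState {
    Ok,
    Split(u64),    // seed key for a new right sibling
    RevSplit(u64), // seed key for a new left sibling
}

// Insert k into a full, strictly sorted key array, mirroring the three
// overflow arms of KeyLoc::Missing above.
pub fn insert_full(keys: &mut [u64; CAP], k: u64) -> InsertState {
    // Position where k would land to keep the array sorted.
    let idx = keys.partition_point(|&x| x < k);
    if idx == CAP {
        // Greater than all current keys: split right with k.
        InsertState::Split(k)
    } else if idx == 0 {
        // Lower than all current keys: split left with k.
        InsertState::RevSplit(k)
    } else {
        // In range: pop the maximum, shift, insert k, split the old max right.
        let max = keys[CAP - 1];
        keys.copy_within(idx..CAP - 1, idx + 1);
        keys[idx] = k;
        InsertState::Split(max)
    }
}
```

The real leaf additionally carries a bucket of `Datum { k, v }` per key slot to absorb hash collisions; the sketch keeps only the key movement.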
unsafe { // self.key[idx] = h; slice_insert(&mut (*self.ctrl.a).1, h, idx); slice_insert( &mut self.values, MaybeUninit::new(smallvec![Datum { k, v }]), idx, ); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); self.inc_slots(); LeafInsertState::Ok(None) } } } } pub(crate) fn remove(&mut self, h: u64, k: &Q) -> LeafRemoveState where K: Borrow, Q: Eq + ?Sized, { debug_assert_leaf!(self); if self.slots() == 0 { return LeafRemoveState::Shrink(None); } // We must have a value - where are you .... match leaf_simd_search(self, h, k).ok() { // Count still greater than 0, so Ok and None, None => LeafRemoveState::Ok(None), Some((slot_idx, bk_idx)) => { // pop from the bucket. let Datum { k: _pk, v: pv } = unsafe { (*self.values[slot_idx].as_mut_ptr()).remove(bk_idx) }; // How much remains? if unsafe { (*self.values[slot_idx].as_ptr()).is_empty() } { unsafe { // Get the kv out let _ = slice_remove(&mut (*self.ctrl.a).1, slot_idx); // AFTER the remove, set the top value to u64::MAX (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; // Remove the bucket. let _ = slice_remove(&mut self.values, slot_idx).assume_init(); } self.dec_slots(); if self.slots() == 0 { LeafRemoveState::Shrink(Some(pv)) } else { LeafRemoveState::Ok(Some(pv)) } } else { // The bucket lives! LeafRemoveState::Ok(Some(pv)) } } } } pub(crate) fn take_from_l_to_r(&mut self, right: &mut Self) { debug_assert!(right.slots() == 0); let slots = self.slots() / 2; let start_idx = self.slots() - slots; //move key and values unsafe { slice_move( &mut (*right.ctrl.a).1, 0, &mut (*self.ctrl.a).1, start_idx, slots, ); slice_move(&mut right.values, 0, &mut self.values, start_idx, slots); // Update the left keys to be valid. 
// so we took from: // [ a, b, c, d, e, f, g ] // ^ ^ // | slots // start // so we need to fill from start_idx + slots let tgt_ptr = (*self.ctrl.a).1.as_mut_ptr().add(start_idx); // https://doc.rust-lang.org/std/ptr/fn.write_bytes.html // Sets count * size_of::<T>() bytes of memory starting at dst to val. ptr::write_bytes::<u64>(tgt_ptr, 0xff, slots); #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); } // update the slots unsafe { (*self.ctrl.a).0.set_slots(start_idx); (*right.ctrl.a).0.set_slots(slots); } } pub(crate) fn take_from_r_to_l(&mut self, right: &mut Self) { debug_assert!(self.slots() == 0); let slots = right.slots() / 2; let start_idx = right.slots() - slots; // Move values from right to left. unsafe { slice_move(&mut (*self.ctrl.a).1, 0, &mut (*right.ctrl.a).1, 0, slots); slice_move(&mut self.values, 0, &mut right.values, 0, slots); } // Shift the values in right down. unsafe { ptr::copy( (*right.ctrl.a).1.as_ptr().add(slots), (*right.ctrl.a).1.as_mut_ptr(), start_idx, ); ptr::copy( right.values.as_ptr().add(slots), right.values.as_mut_ptr(), start_idx, ); } // Fix the slots. unsafe { (*self.ctrl.a).0.set_slots(slots); (*right.ctrl.a).0.set_slots(start_idx); } // Update the upper keys in right unsafe { let tgt_ptr = (*right.ctrl.a).1.as_mut_ptr().add(start_idx); // https://doc.rust-lang.org/std/ptr/fn.write_bytes.html // Sets count * size_of::<T>() bytes of memory starting at dst to val.
ptr::write_bytes::(tgt_ptr, 0xff, H_CAPACITY - start_idx); #[cfg(all(test, not(miri)))] debug_assert!(right.poison == FLAG_POISON); } } /* pub(crate) fn remove_lt(&mut self, k: &Q) -> LeafPruneState where K: Borrow, Q: Ord, { unimplemented!(); } */ #[inline(always)] pub(crate) fn merge(&mut self, right: &mut Self) { debug_assert_leaf!(self); debug_assert_leaf!(right); let sc = self.slots(); let rc = right.slots(); unsafe { slice_merge(&mut (*self.ctrl.a).1, sc, &mut (*right.ctrl.a).1, rc); slice_merge(&mut self.values, sc, &mut right.values, rc); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); unsafe { (*self.ctrl.a).0.add_slots(right.count()); (*right.ctrl.a).0.set_slots(0); } debug_assert!(self.verify()); } pub(crate) fn verify(&self) -> bool { debug_assert_leaf!(self); #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); // println!("verify leaf -> {:?}", self); if unsafe { self.ctrl.a.0.slots() } == 0 { return true; } // Check everything above slots is u64::max for work_idx in unsafe { (*self.ctrl.a).0.slots() }..H_CAPACITY { debug_assert!(unsafe { (*self.ctrl.a).1[work_idx] } == u64::MAX); } // Check key sorting let mut lk: &u64 = unsafe { &(*self.ctrl.a).1[0] }; for work_idx in 1..unsafe { self.ctrl.a.0.slots() } { let rk: &u64 = unsafe { &(*self.ctrl.a).1[work_idx] }; // Eq not ok as we have buckets. 
if lk >= rk { // println!("{:?}", self); if cfg!(test) { return false; } else { debug_assert!(false); } } lk = rk; } true } fn free(node: *mut Self) { unsafe { let _x: Box>> = Box::from_raw(node as *mut CachePadded>); } } } impl Debug for Leaf { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), Error> { debug_assert_leaf!(self); write!(f, "Leaf -> {}", self.slots())?; #[cfg(all(test, not(miri)))] write!(f, " nid: {}", self.nid)?; write!(f, " \\-> [ ")?; for idx in 0..self.slots() { write!(f, "{:?}, ", unsafe { self.ctrl.a.1[idx] })?; write!(f, "[")?; for d in unsafe { (*self.values[idx].as_ptr()).as_slice().iter() } { write!(f, "{:?}, ", d.k)?; } write!(f, "], ")?; } write!(f, " ]") } } impl Drop for Leaf { fn drop(&mut self) { debug_assert_leaf!(self); #[cfg(all(test, not(miri)))] release_nid(self.nid); // Due to the use of maybe uninit we have to drop any contained values. unsafe { for idx in 0..self.slots() { ptr::drop_in_place(self.values[idx].as_mut_ptr()); } } // Done unsafe { (*self.ctrl.a).0 .0 = FLAG_DROPPED; } debug_assert!(unsafe { self.ctrl.a.0 .0 } & FLAG_MASK != FLAG_HASH_LEAF); // #[cfg(test)] // println!("set leaf {:?} to {:x}", self.nid, self.meta.0); } } impl Branch { #[inline(always)] #[allow(unused)] fn set_slots(&mut self, c: usize) { debug_assert_branch!(self); unsafe { (*self.ctrl.a).0.set_slots(c) } } #[inline(always)] pub(crate) fn slots(&self) -> usize { debug_assert_branch!(self); unsafe { self.ctrl.a.0.slots() } } #[inline(always)] fn inc_slots(&mut self) { debug_assert_branch!(self); unsafe { (*self.ctrl.a).0.inc_slots() } } #[inline(always)] fn dec_slots(&mut self) { debug_assert_branch!(self); unsafe { (*self.ctrl.a).0.dec_slots() } } #[inline(always)] pub(crate) fn get_txid(&self) -> u64 { debug_assert_branch!(self); unsafe { self.ctrl.a.0.get_txid() } } // Can't inline as this is recursive! pub(crate) fn min(&self) -> u64 { debug_assert_branch!(self); unsafe { (*self.nodes[0]).min() } } // Can't inline as this is recursive! 
pub(crate) fn max(&self) -> u64 { debug_assert_branch!(self); // Remember, self.slots() is + 1 offset, so this gets // the max node unsafe { (*self.nodes[self.slots()]).max() } } pub(crate) fn req_clone(&self, txid: u64) -> Option<*mut Node> { debug_assert_branch!(self); if self.get_txid() == txid { // Same txn, no action needed. None } else { // println!("Req clone branch"); // Diff txn, must clone. let new_txid = (unsafe { self.ctrl.a.0 .0 } & (FLAG_MASK | COUNT_MASK)) | (txid << TXID_SHF); let x: Box>> = Box::new(CachePadded::new(Branch { // This sets the default (key) slots to 1, since we take an l/r ctrl: Ctrl { simd: ManuallyDrop::new(u64x8::from_array([ new_txid, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, u64::MAX, ])), }, #[cfg(all(test, not(miri)))] poison: FLAG_POISON, // Can clone the node pointers. nodes: self.nodes, #[cfg(all(test, not(miri)))] nid: alloc_nid(), })); let x = Box::into_raw(x); let xr = x as *mut Branch; // Dup the keys unsafe { ptr::copy_nonoverlapping( &self.ctrl.a.1 as *const u64, (*(*xr).ctrl.a).1.as_mut_ptr(), H_CAPACITY, ) } Some(x as *mut Node) } } #[inline(always)] pub(crate) fn locate_node(&self, h: u64) -> usize { debug_assert_branch!(self); // If the value is Ok(idx), then that means // we were located to the right node. This is because we // exactly hit and located on the key. // // If the value is Err(idx), then we have the exact index already. // as branches is of-by-one. match branch_simd_search::(self, h) { Err(idx) => idx, Ok(idx) => idx + 1, } } #[inline(always)] pub(crate) fn get_idx_unchecked(&self, idx: usize) -> *mut Node { debug_assert_branch!(self); debug_assert!(idx <= self.slots()); debug_assert!(!self.nodes[idx].is_null()); self.nodes[idx] } #[inline(always)] pub(crate) fn get_idx_checked(&self, idx: usize) -> Option<*mut Node> { debug_assert_branch!(self); // Remember, that nodes can have +1 to slots which is why <= here, not <. 
if idx <= self.slots() { debug_assert!(!self.nodes[idx].is_null()); Some(self.nodes[idx]) } else { None } } #[cfg(test)] pub(crate) fn get_ref(&self, h: u64, k: &Q) -> Option<&V> where K: Borrow, Q: Eq, { debug_assert_branch!(self); let idx = self.locate_node(h); unsafe { (*self.nodes[idx]).get_ref(h, k) } } pub(crate) fn add_node(&mut self, node: *mut Node) -> BranchInsertState { debug_assert_branch!(self); // do we have space? if self.slots() == H_CAPACITY { // if no space -> // split and send two nodes back for new branch // There are three possible states that this causes. // 1 * The inserted node is the greater than all current values, causing l(max, node) // to be returned. // 2 * The inserted node is between max - 1 and max, causing l(node, max) to be returned. // 3 * The inserted node is a low/middle value, causing max and max -1 to be returned. // let kr: u64 = unsafe { (*node).min() }; let r = branch_simd_search(self, kr); let ins_idx = r.unwrap_err(); // Everything will pop max. let max = unsafe { *(self.nodes.get_unchecked(HBV_CAPACITY - 1)) }; let res = match ins_idx { // Case 1 H_CAPACITY => { // println!("case 1"); // Greater than all current values, so we'll just return max and node. unsafe { (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; } // Now setup the ret val NOTICE compared to case 2 that we swap node and max? BranchInsertState::Split(max, node) } // Case 2 H_CAPACITY_N1 => { // println!("case 2"); // Greater than all but max, so we return max and node in the correct order. // Drop the key between them. unsafe { (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; } // Now setup the ret val NOTICE compared to case 1 that we swap node and max? BranchInsertState::Split(node, max) } // Case 3 ins_idx => { // Get the max - 1 and max nodes out. let maxn1 = unsafe { *(self.nodes.get_unchecked(HBV_CAPACITY - 2)) }; unsafe { // Drop the key between them. (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; // Drop the key before us that we are about to replace. 
(*self.ctrl.a).1[H_CAPACITY - 2] = u64::MAX; } // Add node and it's key to the correct location. let leaf_ins_idx = ins_idx + 1; unsafe { slice_insert(&mut (*self.ctrl.a).1, kr, ins_idx); slice_insert(&mut self.nodes, node, leaf_ins_idx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); BranchInsertState::Split(maxn1, max) } }; // Dec slots as we always reduce branch by one as we split return // two. self.dec_slots(); res } else { // if space -> // Get the nodes min-key - we clone it because we'll certainly be inserting it! let k: u64 = unsafe { (*node).min() }; // bst and find when min-key < key[idx] let r = branch_simd_search(self, k); // if r is ever found, I think this is a bug, because we should never be able to // add a node with an existing min. // // [ 5 ] // / \ // [0,] [5,] // // So if we added here to [0, ], and it had to overflow to split, then everything // must be < 5. Why? Because to get to [0,] as your insert target, you must be < 5. // if we added to [5,] then a split must be greater than, or the insert would replace 5. // // if we consider // // [ 5 ] // / \ // [0,] [7,] // // Now we insert 5, and 7, splits. 5 would remain in the tree and we'd split 7 to the right // // As a result, any "Ok(idx)" must represent a corruption of the tree. // debug_assert!(r.is_err()); let ins_idx = r.unwrap_err(); let leaf_ins_idx = ins_idx + 1; // So why do we only need to insert right? Because the left-most // leaf when it grows, it splits to the right. That importantly // means that we only need to insert to replace the min and it's // right leaf, or anything higher. As a result, we are always // targetting ins_idx and leaf_ins_idx = ins_idx + 1. // // We have a situation like: // // [1, 3, 9, 18] // // and ins_idx is 2. IE: // // [1, 3, 9, 18] // ^-- k=6 // // So this we need to shift those r-> and insert. 
// // [1, 3, x, 9, 18] // ^-- k=6 // // [1, 3, 6, 9, 18] // // Now we need to consider the leaves too: // // [1, 3, 9, 18] // | | | | | // v v v v v // 0 1 3 9 18 // // So that means we need to move leaf_ins_idx = (ins_idx + 1) // right also // // [1, 3, x, 9, 18] // | | | | | | // v v v v v v // 0 1 3 x 9 18 // ^-- leaf for k=6 will go here. // // Now to talk about the right expand issue - lets say 0 conducted // a split, it returns the new right node - which would push // 3 to the right to insert a new right hand side as required. So we // really never need to consider the left most leaf to have to be // replaced in any conditions. // // Magic! unsafe { slice_insert(&mut (*self.ctrl.a).1, k, ins_idx); slice_insert(&mut self.nodes, node, leaf_ins_idx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); // finally update the slots self.inc_slots(); // Return that we are okay to go! BranchInsertState::Ok } } pub(crate) fn add_node_left( &mut self, lnode: *mut Node, sibidx: usize, ) -> BranchInsertState { debug_assert_branch!(self); if self.slots() == H_CAPACITY { if sibidx == self.slots() { // If sibidx == self.slots, then we must be going into max - 1. // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, k5, xx ] // [ v1, v2, v3, v4, v5, v6, xx ] // // [ k1, k2, k3, k4, k5, xx ] [ k6 ] // [ v1, v2, v3, v4, v5, v6, xx ] -> [ ln, v7 ] // // So in this case we drop k6, and return a split. 
let max = self.nodes[HBV_CAPACITY - 1]; unsafe { (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; } self.dec_slots(); BranchInsertState::Split(lnode, max) } else if sibidx == (self.slots() - 1) { // If sibidx == (self.slots - 1), then we must be going into max - 2 // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, dd, xx ] // [ v1, v2, v3, v4, v5, xx, xx ] // // // This means that we need to return v6,v7 in a split, and // just append node after v5. let maxn1 = self.nodes[HBV_CAPACITY - 2]; let max = self.nodes[HBV_CAPACITY - 1]; unsafe { (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; (*self.ctrl.a).1[H_CAPACITY - 2] = u64::MAX; } self.dec_slots(); self.dec_slots(); // [ k1, k2, k3, k4, dd, xx ] [ k6 ] // [ v1, v2, v3, v4, v5, xx, xx ] -> [ v6, v7 ] let h: u64 = unsafe { (*lnode).min() }; unsafe { slice_insert(&mut (*self.ctrl.a).1, h, sibidx - 1); slice_insert(&mut self.nodes, lnode, sibidx); // slice_insert(&mut self.node, MaybeUninit::new(node), sibidx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); self.inc_slots(); // // [ k1, k2, k3, k4, nk, xx ] [ k6 ] // [ v1, v2, v3, v4, v5, ln, xx ] -> [ v6, v7 ] BranchInsertState::Split(maxn1, max) } else { // All other cases; // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, dd, xx ] // [ v1, v2, v3, v4, v5, xx, xx ] // // [ k1, k2, k3, nk, k4, dd ] [ k6 ] // [ v1, v2, v3, ln, v4, v5, xx ] -> [ v6, v7 ] // // This means that we need to return v6,v7 in a split,, drop k5, // then insert // Setup the nodes we intend to split away. 
let maxn1 = self.nodes[HBV_CAPACITY - 2]; let max = self.nodes[HBV_CAPACITY - 1]; unsafe { (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; (*self.ctrl.a).1[H_CAPACITY - 2] = u64::MAX; } self.dec_slots(); self.dec_slots(); // println!("pre-fixup -> {:?}", self); let sibnode = self.nodes[sibidx]; let nkey: u64 = unsafe { (*sibnode).min() }; unsafe { slice_insert(&mut (*self.ctrl.a).1, nkey, sibidx); slice_insert(&mut self.nodes, lnode, sibidx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); self.inc_slots(); // println!("post fixup -> {:?}", self); BranchInsertState::Split(maxn1, max) } } else { // We have space, so just put it in! // [ k1, k2, k3, k4, xx, xx ] // [ v1, v2, v3, v4, v5, xx, xx ] // ^ ^-- sibidx // \---- where left should go // // [ k1, k2, k3, k4, xx, xx ] // [ v1, v2, v3, ln, v4, v5, xx ] // // [ k1, k2, k3, nk, k4, xx ] // [ v1, v2, v3, ln, v4, v5, xx ] // let sibnode = self.nodes[sibidx]; let nkey: u64 = unsafe { (*sibnode).min() }; unsafe { slice_insert(&mut self.nodes, lnode, sibidx); slice_insert(&mut (*self.ctrl.a).1, nkey, sibidx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); self.inc_slots(); // println!("post fixup -> {:?}", self); BranchInsertState::Ok } } fn remove_by_idx(&mut self, idx: usize) -> *mut Node { debug_assert_branch!(self); debug_assert!(idx <= self.slots()); debug_assert!(idx > 0); // remove by idx. let _pk = unsafe { slice_remove(&mut (*self.ctrl.a).1, idx - 1) }; // AFTER the remove, set the top value to u64::MAX unsafe { (*self.ctrl.a).1[H_CAPACITY - 1] = u64::MAX; } let pn = unsafe { slice_remove(&mut self.nodes, idx) }; self.dec_slots(); pn } pub(crate) fn shrink_decision(&mut self, ridx: usize) -> BranchShrinkState { // Given two nodes, we need to decide what to do with them! // // Remember, this isn't happening in a vacuum. 
This is really a manipulation of // the following structure: // // parent (self) // / \ // left right // // We also need to consider the following situation too: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 r2 // // Imagine we have exhausted r2, so we need to merge: // // root // / \ // lbranch rbranch // / \ / \ // l1 l2 r1 <<-- r2 // // This leaves us with a partial state of // // root // / \ // lbranch rbranch (invalid!) // / \ / // l1 l2 r1 // // This means rbranch issues a clone-shrink to root. The clone-shrink must contain the remainder // so that it can be reparented: // // root // / // lbranch -- // / \ \ // l1 l2 r1 // // Now root has to shrink too. // // root -- // / \ \ // l1 l2 r1 // // So, we have to analyse the situation. // * Have left or right been emptied? (how to handle when branches) // * Is left or right below a reasonable threshold? // * Does the opposite have capacity to remain valid? debug_assert_branch!(self); debug_assert!(ridx > 0 && ridx <= self.slots()); let left = self.nodes[ridx - 1]; let right = self.nodes[ridx]; debug_assert!(!left.is_null()); debug_assert!(!right.is_null()); match unsafe { (*left).ctrl.a.0 .0 & FLAG_MASK } { FLAG_HASH_LEAF => { let lmut = leaf_ref!(left, K, V); let rmut = leaf_ref!(right, K, V); if lmut.slots() + rmut.slots() <= H_CAPACITY { lmut.merge(rmut); // remove the right node from parent let dnode = self.remove_by_idx(ridx); debug_assert!(dnode == right); if self.slots() == 0 { // We now need to be merged across as we only contain a single // value now. BranchShrinkState::Shrink(dnode) } else { // We are complete!
// #[cfg(test)] // println!("🔥 {:?}", rmut.nid); BranchShrinkState::Merge(dnode) } } else if rmut.slots() > (H_CAPACITY / 2) { lmut.take_from_r_to_l(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else if lmut.slots() > (H_CAPACITY / 2) { lmut.take_from_l_to_r(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else { // Do nothing BranchShrinkState::Balanced } } FLAG_HASH_BRANCH => { // right or left is now in a "corrupt" state with a single value that we need to relocate // to left - or we need to borrow from left and fix it! let lmut = branch_ref!(left, K, V); let rmut = branch_ref!(right, K, V); debug_assert!(rmut.slots() == 0 || lmut.slots() == 0); debug_assert!(rmut.slots() <= H_CAPACITY || lmut.slots() <= H_CAPACITY); // println!("branch {:?} {:?}", lmut.slots(), rmut.slots()); if lmut.slots() == H_CAPACITY { // println!("branch take_from_l_to_r "); lmut.take_from_l_to_r(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else if rmut.slots() == H_CAPACITY { // println!("branch take_from_r_to_l "); lmut.take_from_r_to_l(rmut); self.rekey_by_idx(ridx); BranchShrinkState::Balanced } else { // println!("branch merge"); // merge the right to tail of left. // println!("BL {:?}", lmut); // println!("BR {:?}", rmut); lmut.merge(rmut); // println!("AL {:?}", lmut); // println!("AR {:?}", rmut); // Reduce our slots let dnode = self.remove_by_idx(ridx); debug_assert!(dnode == right); if self.slots() == 0 { // We now need to be merged across as we also only contain a single // value now. BranchShrinkState::Shrink(dnode) } else { // We are complete! 
// #[cfg(test)] // println!("🚨 {:?}", rmut.nid); BranchShrinkState::Merge(dnode) } } } _ => unreachable!(), } } #[inline(always)] pub(crate) fn extract_last_node(&self) -> *mut Node { debug_assert_branch!(self); self.nodes[0] } pub(crate) fn rekey_by_idx(&mut self, idx: usize) { debug_assert_branch!(self); debug_assert!(idx <= self.slots()); debug_assert!(idx > 0); // For the node listed, rekey it. let nref = self.nodes[idx]; unsafe { (*self.ctrl.a).1[idx - 1] = (*nref).min() }; } #[inline(always)] pub(crate) fn merge(&mut self, right: &mut Self) { debug_assert_branch!(self); debug_assert_branch!(right); let sc = self.slots(); let rc = right.slots(); if rc == 0 { let node = right.nodes[0]; debug_assert!(!node.is_null()); let h: u64 = unsafe { (*node).min() }; let ins_idx = self.slots(); let leaf_ins_idx = ins_idx + 1; unsafe { slice_insert(&mut (*self.ctrl.a).1, h, ins_idx); slice_insert(&mut self.nodes, node, leaf_ins_idx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); self.inc_slots(); } else { debug_assert!(sc == 0); unsafe { // Move all the nodes from right. slice_merge(&mut self.nodes, 1, &mut right.nodes, rc + 1); // Move the related keys. slice_merge(&mut (*self.ctrl.a).1, 1, &mut (*right.ctrl.a).1, rc); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); // Set our slots correctly. unsafe { (*self.ctrl.a).0.set_slots(rc + 1); } // Set right len to 0 unsafe { (*right.ctrl.a).0.set_slots(0); } // rekey the lowest pointer. unsafe { let nptr = self.nodes[1]; let h: u64 = (*nptr).min(); (*self.ctrl.a).1[0] = h; } // done! } } pub(crate) fn take_from_l_to_r(&mut self, right: &mut Self) { debug_assert_branch!(self); debug_assert_branch!(right); debug_assert!(self.slots() > right.slots()); // Starting index of where we move from. We work normally from a branch // with only zero (but the base) branch item, but we do the math anyway // to be sure incase we change later. 
// // So, self.len must be larger, so let's give a few examples here. // 4 = 7 - (7 + 0) / 2 (will move 4, 5, 6) // 3 = 6 - (6 + 0) / 2 (will move 3, 4, 5) // 3 = 5 - (5 + 0) / 2 (will move 3, 4) // 2 = 4 .... (will move 2, 3) // let slots = (self.slots() + right.slots()) / 2; let start_idx = self.slots() - slots; // Move the remaining element from r to the correct location. // // [ k1, k2, k3, k4, k5, k6 ] // [ v1, v2, v3, v4, v5, v6, v7 ] -> [ v8, ------- ] // // To: // // [ k1, k2, k3, k4, k5, k6 ] [ --, --, --, --, ... // [ v1, v2, v3, v4, v5, v6, v7 ] -> [ --, --, --, v8, --, ... // unsafe { ptr::swap( right.nodes.get_unchecked_mut(0), right.nodes.get_unchecked_mut(slots), ) } // Move our values from the tail. // We would move 3 now to: // // [ k1, k2, k3, k4, k5, k6 ] [ --, --, --, --, ... // [ v1, v2, v3, v4, --, --, -- ] -> [ v5, v6, v7, v8, --, ... // unsafe { slice_move(&mut right.nodes, 0, &mut self.nodes, start_idx + 1, slots); } // Remove the keys from left. // So we need to remove the corresponding keys, so that we get: // // [ k1, k2, k3, --, --, -- ] [ --, --, --, --, ... // [ v1, v2, v3, v4, --, --, -- ] -> [ v5, v6, v7, v8, --, ... // // This means it's start_idx - 1 up to BK cap unsafe { let tgt_ptr = (*self.ctrl.a).1.as_mut_ptr().add(start_idx); // https://doc.rust-lang.org/std/ptr/fn.write_bytes.html // Sets count * size_of::<T>() bytes of memory starting at dst to val. ptr::write_bytes::<u64>(tgt_ptr, 0xff, H_CAPACITY - start_idx); } // Adjust both slots - we do this before rekey to ensure that the safety // checks hold in debugging. unsafe { (*right.ctrl.a).0.set_slots(slots); } unsafe { (*self.ctrl.a).0.set_slots(start_idx); } // Rekey right for kidx in 1..(slots + 1) { right.rekey_by_idx(kidx); } #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); #[cfg(all(test, not(miri)))] debug_assert!(right.poison == FLAG_POISON); // Done!
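The arithmetic above reduces to "move the upper half of the fuller node across". This is a hedged, simplified sketch of that half-split, using `Vec` in place of the fixed arrays and ignoring the branch's extra base node pointer, the `u64::MAX` sentinel writes, and the parent rekey; the function name mirrors the method above but is not the crate's API.

```rust
// Rebalance an emptied right sibling by taking the upper half of left.
pub fn take_from_l_to_r(left: &mut Vec<u64>, right: &mut Vec<u64>) {
    assert!(right.is_empty());
    // slots = (left + right) / 2, with right == 0, is just half of left.
    let take = left.len() / 2;
    let start = left.len() - take;
    // split_off moves the tail of left into right in one step; the real
    // code does the equivalent with slice_move over raw fixed arrays.
    *right = left.split_off(start);
}
```

After the move, both nodes hold at least half of the original keys, which is what keeps them above the minimum-occupancy threshold that `shrink_decision` tests.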
debug_assert!(self.verify()); debug_assert!(right.verify()); } pub(crate) fn take_from_r_to_l(&mut self, right: &mut Self) { debug_assert_branch!(self); debug_assert_branch!(right); debug_assert!(right.slots() >= self.slots()); let slots = (self.slots() + right.slots()) / 2; let start_idx = right.slots() - slots; // We move slots from right to left. unsafe { slice_move(&mut self.nodes, 1, &mut right.nodes, 0, slots); } // move keys down in right unsafe { ptr::copy( right.ctrl.a.1.as_ptr().add(slots), (*right.ctrl.a).1.as_mut_ptr(), start_idx, ); } // Fix up the upper keys /* for idx in start_idx..H_CAPACITY { right.key[idx] = u64::MAX; } */ unsafe { let tgt_ptr = (*right.ctrl.a).1.as_mut_ptr().add(start_idx); // https://doc.rust-lang.org/std/ptr/fn.write_bytes.html // Sets count * size_of::<T>() bytes of memory starting at dst to val. ptr::write_bytes::<u64>(tgt_ptr, 0xff, H_CAPACITY - start_idx); } #[cfg(all(test, not(miri)))] debug_assert!(right.poison == FLAG_POISON); // move nodes down in right unsafe { ptr::copy( right.nodes.as_ptr().add(slots), right.nodes.as_mut_ptr(), start_idx + 1, ); } // update slots unsafe { (*right.ctrl.a).0.set_slots(start_idx); } unsafe { (*self.ctrl.a).0.set_slots(slots); } // Rekey left for kidx in 1..(slots + 1) { self.rekey_by_idx(kidx); } debug_assert!(self.verify()); debug_assert!(right.verify()); // Done! } #[inline(always)] pub(crate) fn replace_by_idx(&mut self, idx: usize, node: *mut Node<K, V>) { debug_assert_branch!(self); debug_assert!(idx <= self.slots()); debug_assert!(!self.nodes[idx].is_null()); self.nodes[idx] = node; } pub(crate) fn clone_sibling_idx( &mut self, txid: u64, idx: usize, last_seen: &mut Vec<*mut Node<K, V>>, first_seen: &mut Vec<*mut Node<K, V>>, ) -> usize { debug_assert_branch!(self); // if we clone, return Some new ptr. if not, None. let (ridx, idx) = if idx == 0 { // println!("clone_sibling_idx clone right"); // If we are 0 we clone our right sibling, // and return the right idx as 1.
(1, 1) } else { // println!("clone_sibling_idx clone left"); // Else we clone the left, and leave our index unchanged // as we are the right node. (idx, idx - 1) }; // Now clone the item at idx. debug_assert!(idx <= self.slots()); let sib_ptr = self.nodes[idx]; debug_assert!(!sib_ptr.is_null()); // Do we need to clone? let res = match unsafe { (*sib_ptr).ctrl.a.0 .0 } & FLAG_MASK { FLAG_HASH_LEAF => { let lref = unsafe { &*(sib_ptr as *const _ as *const Leaf) }; lref.req_clone(txid) } FLAG_HASH_BRANCH => { let bref = unsafe { &*(sib_ptr as *const _ as *const Branch) }; bref.req_clone(txid) } _ => unreachable!(), }; // If it did clone, it's a some, so we map that to have the from and new ptrs for // the memory management. if let Some(n_ptr) = res { // println!("ls push 101 {:?}", sib_ptr); first_seen.push(n_ptr); last_seen.push(sib_ptr); // Put the pointer in place. self.nodes[idx] = n_ptr; }; // Now return the right index ridx } /* pub(crate) fn trim_lt_key( &mut self, k: &Q, last_seen: &mut Vec<*mut Node>, first_seen: &mut Vec<*mut Node>, ) -> BranchTrimState where K: Borrow, Q: Ord, { debug_assert_branch!(self); // The possible states of a branch are // // [ 0, 4, 8, 12 ] // [n1, n2, n3, n4, n5] // let r = key_search!(self, k); let sc = self.slots(); match r { Ok(idx) => { debug_assert!(idx < sc); // * A key matches exactly a value. IE k is 4. This means we can remove // n1 and n2 because we know 4 must be in n3 as the min. // NEED MM debug_assert!(false); unsafe { slice_slide_and_drop(&mut self.key, idx, sc - (idx + 1)); slice_slide(&mut self.nodes.as_mut(), idx, sc - idx); } self.meta.set_slots(sc - (idx + 1)); if self.slots() == 0 { let rnode = self.extract_last_node(); BranchTrimState::Promote(rnode) } else { BranchTrimState::Complete } } Err(idx) => { if idx == 0 { // * The key is less than min. IE it wants to remove the lowest value. // Check the "max" value of the subtree to know if we can proceed. 
let tnode: *mut Node = self.nodes[0]; let branch_k: &K = unsafe { (*tnode).max() }; if branch_k.borrow() < k { // Everything is smaller, let's remove it that subtree. // NEED MM debug_assert!(false); let _pk = unsafe { slice_remove(&mut self.key, 0).assume_init() }; let _pn = unsafe { slice_remove(self.nodes.as_mut(), 0) }; self.dec_slots(); BranchTrimState::Complete } else { BranchTrimState::Complete } } else if idx >= self.slots() { // remove everything except max. unsafe { // NEED MM debug_assert!(false); // We just drop all the keys. for kidx in 0..self.slots() { ptr::drop_in_place(self.key[kidx].as_mut_ptr()); // ptr::drop_in_place(self.nodes[kidx].as_mut_ptr()); } // Move the last node to the bottom. self.nodes[0] = self.nodes[sc]; } self.meta.set_slots(0); let rnode = self.extract_last_node(); // Something may still be valid, hand it on. BranchTrimState::Promote(rnode) } else { // * A key is between two values. We can remove everything less, but not // the assocated. For example, remove 6 would cause n1, n2 to be removed, but // the prune/walk will have to examine n3 to know about further changes. debug_assert!(idx > 0); let tnode: *mut Node = self.nodes[0]; let branch_k: &K = unsafe { (*tnode).max() }; if branch_k.borrow() < k { // NEED MM debug_assert!(false); // Remove including idx. 
unsafe { slice_slide_and_drop(&mut self.key, idx, sc - (idx + 1)); slice_slide(self.nodes.as_mut(), idx, sc - idx); } self.meta.set_slots(sc - (idx + 1)); } else { // NEED MM debug_assert!(false); unsafe { slice_slide_and_drop(&mut self.key, idx - 1, sc - idx); slice_slide(self.nodes.as_mut(), idx - 1, sc - (idx - 1)); } self.meta.set_slots(sc - idx); } if self.slots() == 0 { // NEED MM debug_assert!(false); let rnode = self.extract_last_node(); BranchTrimState::Promote(rnode) } else { BranchTrimState::Complete } } } } } */ pub(crate) fn verify(&self) -> bool { debug_assert_branch!(self); #[cfg(all(test, not(miri)))] debug_assert!(self.poison == FLAG_POISON); if self.slots() == 0 { // Not possible to be valid! debug_assert!(false); return false; } // println!("verify branch -> {:?}", self); // Check everything above slots is u64::max for work_idx in unsafe { self.ctrl.a.0.slots() }..H_CAPACITY { if unsafe { self.ctrl.a.1[work_idx] } != u64::MAX { eprintln!("FAILED ARRAY -> {:?}", unsafe { self.ctrl.a.1 }); debug_assert!(false); } } // Check we are sorted. let mut lk: u64 = unsafe { self.ctrl.a.1[0] }; for work_idx in 1..self.slots() { let rk: u64 = unsafe { self.ctrl.a.1[work_idx] }; // println!("{:?} >= {:?}", lk, rk); if lk >= rk { debug_assert!(false); return false; } lk = rk; } // Recursively call verify for work_idx in 0..self.slots() { let node = unsafe { &*self.nodes[work_idx] }; if !node.verify() { for work_idx in 0..(self.slots() + 1) { let nref = unsafe { &*self.nodes[work_idx] }; if !nref.verify() { // println!("Failed children"); debug_assert!(false); return false; } } } } // Check descendants are validly ordered. // V-- remember, there are slots + 1 nodes. 
for work_idx in 0..self.slots() { // get left max and right min let lnode = unsafe { &*self.nodes[work_idx] }; let rnode = unsafe { &*self.nodes[work_idx + 1] }; let pkey = unsafe { self.ctrl.a.1[work_idx] }; let lkey: u64 = lnode.max(); let rkey: u64 = rnode.min(); if lkey >= pkey || pkey > rkey { /* println!("++++++"); println!("{:?} >= {:?}, {:?} > {:?}", lkey, pkey, pkey, rkey); println!("out of order key found {}", work_idx); // println!("left --> {:?}", lnode); // println!("right -> {:?}", rnode); println!("prnt -> {:?}", self); */ debug_assert!(false); return false; } } // All good! true } #[allow(clippy::cast_ptr_alignment)] fn free(node: *mut Self) { unsafe { let mut _x: Box>> = Box::from_raw(node as *mut CachePadded>); } } } impl Debug for Branch { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), Error> { debug_assert_branch!(self); write!(f, "Branch -> {}", self.slots())?; #[cfg(all(test, not(miri)))] write!(f, " nid: {}", self.nid)?; write!(f, " \\-> [ ")?; for idx in 0..self.slots() { write!(f, "{:?}, ", unsafe { self.ctrl.a.1[idx] })?; } write!(f, " ]") } } impl Drop for Branch { fn drop(&mut self) { debug_assert_branch!(self); #[cfg(all(test, not(miri)))] release_nid(self.nid); // Done unsafe { (*self.ctrl.a).0 .0 = FLAG_DROPPED; } debug_assert!(unsafe { self.ctrl.a.0 .0 } & FLAG_MASK != FLAG_HASH_BRANCH); // println!("set branch {:?} to {:x}", self.nid, self.meta.0); } } #[cfg(test)] mod tests { use super::*; /* #[test] fn test_hashmap2_node_cache_size() { let ls = std::mem::size_of::>() - std::mem::size_of::(); let bs = std::mem::size_of::>() - std::mem::size_of::(); println!("ls {:?}, bs {:?}", ls, bs); assert!(ls <= 128); assert!(bs <= 128); } */ #[test] fn test_hashmap2_node_test_weird_basics() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(leaf.get_txid() == 1); // println!("{:?}", leaf); leaf.set_slots(1); assert!(leaf.slots() == 1); leaf.set_slots(0); assert!(leaf.slots() == 0); leaf.inc_slots(); 
leaf.inc_slots(); leaf.inc_slots(); assert!(leaf.slots() == 3); leaf.dec_slots(); leaf.dec_slots(); leaf.dec_slots(); assert!(leaf.slots() == 0); /* let branch: *mut Branch = Node::new_branch(1, ptr::null_mut(), ptr::null_mut()); let branch = unsafe { &mut *branch }; assert!(branch.get_txid() == 1); // println!("{:?}", branch); branch.set_slots(3); assert!(branch.slots() == 3); branch.set_slots(0); assert!(branch.slots() == 0); Branch::free(branch as *mut _); */ Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_in_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(leaf.get_txid() == 1); // Check insert to capacity for kv in 0..H_CAPACITY { let r = leaf.insert_or_update(kv as u64, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(kv as u64, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); // Check update to capacity for kv in 0..H_CAPACITY { let r = leaf.insert_or_update(kv as u64, kv, kv); if let LeafInsertState::Ok(Some(pkv)) = r { assert!(pkv == kv); assert!(leaf.get_ref(kv as u64, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_collision_in_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; let hash: u64 = 1; assert!(leaf.get_txid() == 1); // Check insert to capacity for kv in 0..H_CAPACITY { let r = leaf.insert_or_update(hash, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(hash, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); // Check update to capacity for kv in 0..H_CAPACITY { let r = leaf.insert_or_update(hash, kv, kv); if let LeafInsertState::Ok(Some(pkv)) = r { assert!(pkv == kv); assert!(leaf.get_ref(hash, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn 
test_hashmap2_node_leaf_out_of_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(H_CAPACITY <= 8); let kvs = [7, 5, 1, 6, 2, 3, 0, 8]; assert!(leaf.get_txid() == 1); // Check insert to capacity for idx in 0..H_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv as u64, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(kv as u64, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.slots() == H_CAPACITY); // Check update to capacity for idx in 0..H_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv as u64, kv, kv); if let LeafInsertState::Ok(Some(pkv)) = r { assert!(pkv == kv); assert!(leaf.get_ref(kv as u64, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.slots() == H_CAPACITY); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_collision_out_of_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; let hash: u64 = 1; assert!(H_CAPACITY <= 8); let kvs = [7, 5, 1, 6, 2, 3, 0, 8]; assert!(leaf.get_txid() == 1); // Check insert to capacity for idx in 0..H_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(hash, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(hash, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.count() == H_CAPACITY); assert!(leaf.slots() == 1); // Check update to capacity for idx in 0..H_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(hash, kv, kv); if let LeafInsertState::Ok(Some(pkv)) = r { assert!(pkv == kv); assert!(leaf.get_ref(hash, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.count() == H_CAPACITY); assert!(leaf.slots() == 1); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_min() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(H_CAPACITY 
<= 8); let kvs = [3, 2, 6, 4, 5, 1, 9, 0]; let min: [u64; 8] = [3, 2, 2, 2, 2, 1, 1, 0]; for idx in 0..H_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv as u64, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(kv as u64, &kv) == Some(&kv)); assert!(leaf.min() == min[idx]); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.slots() == H_CAPACITY); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_max() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; assert!(H_CAPACITY <= 8); let kvs = [1, 3, 2, 6, 4, 5, 9, 0]; let max: [u64; 8] = [1, 3, 3, 6, 6, 6, 9, 9]; for idx in 0..H_CAPACITY { let kv = kvs[idx]; let r = leaf.insert_or_update(kv as u64, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(kv as u64, &kv) == Some(&kv)); assert!(leaf.max() == max[idx]); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.slots() == H_CAPACITY); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_remove_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; for kv in 0..H_CAPACITY { leaf.insert_or_update(kv as u64, kv, kv); } // Remove all but one. for kv in 0..(H_CAPACITY - 1) { let r = leaf.remove(kv as u64, &kv); if let LeafRemoveState::Ok(Some(rkv)) = r { assert!(rkv == kv); } else { assert!(false); } } assert!(leaf.slots() == 1); assert!(leaf.max() == (H_CAPACITY - 1) as u64); // Remove a non-existant value. let r = leaf.remove((H_CAPACITY + 20) as u64, &(H_CAPACITY + 20)); if let LeafRemoveState::Ok(None) = r { // Ok! } else { assert!(false); } // Finally clear the node, should request a shrink. let kv = H_CAPACITY - 1; let r = leaf.remove(kv as u64, &kv); if let LeafRemoveState::Shrink(Some(rkv)) = r { assert!(rkv == kv); } else { assert!(false); } assert!(leaf.slots() == 0); // Remove non-existant post shrink. Should never happen // but safety first! 
let r = leaf.remove(0, &0); if let LeafRemoveState::Shrink(None) = r { // Ok! } else { assert!(false); } assert!(leaf.slots() == 0); assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_remove_out_of_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; for kv in 0..H_CAPACITY { leaf.insert_or_update(kv as u64, kv, kv); } let mid = H_CAPACITY / 2; // This test removes all BUT one node to keep the states simple. for kv in mid..(H_CAPACITY - 1) { let r = leaf.remove(kv as u64, &kv); match r { LeafRemoveState::Ok(_) => {} _ => panic!(), } } for kv in 0..(H_CAPACITY / 2) { let r = leaf.remove(kv as u64, &kv); match r { LeafRemoveState::Ok(_) => {} _ => panic!(), } } assert!(leaf.slots() == 1); assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_remove_collision_in_order() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; let hash: u64 = 1; assert!(leaf.get_txid() == 1); // Check insert to capacity for kv in 0..H_CAPACITY { let r = leaf.insert_or_update(hash, kv, kv); if let LeafInsertState::Ok(None) = r { assert!(leaf.get_ref(hash, &kv) == Some(&kv)); } else { assert!(false); } } assert!(leaf.verify()); assert!(leaf.count() == H_CAPACITY); assert!(leaf.slots() == 1); // Check remove to cap - 1 for kv in 1..H_CAPACITY { let r = leaf.remove(hash, &kv); match r { LeafRemoveState::Ok(_) => {} _ => panic!(), } } assert!(leaf.count() == 1); assert!(leaf.slots() == 1); assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } #[test] fn test_hashmap2_node_leaf_insert_split() { let leaf: *mut Leaf = Node::new_leaf(1); let leaf = unsafe { &mut *leaf }; for kv in 0..H_CAPACITY { let x = kv + 10; leaf.insert_or_update(x as u64, x, x); } // Split right let y = H_CAPACITY + 10; let r = leaf.insert_or_update(y as u64, y, y); if let LeafInsertState::Split(rleaf) = r { unsafe { assert!((*rleaf).slots() 
== 1); } Leaf::free(rleaf); } else { panic!(); } // Split left let r = leaf.insert_or_update(0, 0, 0); if let LeafInsertState::RevSplit(lleaf) = r { unsafe { assert!((*lleaf).slots() == 1); } Leaf::free(lleaf); } else { panic!(); } assert!(leaf.slots() == H_CAPACITY); assert!(leaf.verify()); Leaf::free(leaf as *mut _); assert_released(); } /* #[test] fn test_bptree_leaf_remove_lt() { // This is used in split off. // Remove none let leaf1: *mut Leaf = Node::new_leaf(1); let leaf1 = unsafe { &mut *leaf }; for kv in 0..H_CAPACITY { let _ = leaf1.insert_or_update(kv + 10, kv); } leaf1.remove_lt(&5); assert!(leaf1.slots() == H_CAPACITY); Leaf::free(leaf1 as *mut _); // Remove all let leaf2: *mut Leaf = Node::new_leaf(1); let leaf2 = unsafe { &mut *leaf }; for kv in 0..H_CAPACITY { let _ = leaf2.insert_or_update(kv + 10, kv); } leaf2.remove_lt(&(H_CAPACITY + 10)); assert!(leaf2.slots() == 0); Leaf::free(leaf2 as *mut _); // Remove from middle let leaf3: *mut Leaf = Node::new_leaf(1); let leaf3 = unsafe { &mut *leaf }; for kv in 0..H_CAPACITY { let _ = leaf3.insert_or_update(kv + 10, kv); } leaf3.remove_lt(&((H_CAPACITY / 2) + 10)); assert!(leaf3.slots() == (H_CAPACITY / 2)); Leaf::free(leaf3 as *mut _); // Remove less than not in leaf. let leaf4: *mut Leaf = Node::new_leaf(1); let leaf4 = unsafe { &mut *leaf }; let _ = leaf4.insert_or_update(5, 5); let _ = leaf4.insert_or_update(15, 15); leaf4.remove_lt(&10); assert!(leaf4.slots() == 1); // Add another and remove all. let _ = leaf4.insert_or_update(20, 20); leaf4.remove_lt(&25); assert!(leaf4.slots() == 0); Leaf::free(leaf4 as *mut _); // Done! assert_released(); } */ /* ============================================ */ // Branch tests here! #[test] fn test_hashmap2_node_branch_new() { // Create a new branch, and test it. 
let left: *mut Leaf = Node::new_leaf(1); let left_ref = unsafe { &mut *left }; let right: *mut Leaf = Node::new_leaf(1); let right_ref = unsafe { &mut *right }; // add kvs to l and r for kv in 0..H_CAPACITY { let lkv = kv + 10; let rkv = kv + 20; left_ref.insert_or_update(lkv as u64, lkv, lkv); right_ref.insert_or_update(rkv as u64, rkv, rkv); } // create branch let branch: *mut Branch = Node::new_branch( 1, left as *mut Node, right as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); // Test .min works on our descendants assert!(branch_ref.min() == 10); // Test .max works on our descendats. assert!(branch_ref.max() == (20 + H_CAPACITY - 1) as u64); // Get some k within the leaves. assert!(branch_ref.get_ref(11, &11) == Some(&11)); assert!(branch_ref.get_ref(21, &21) == Some(&21)); // get some k that is out of bounds. assert!(branch_ref.get_ref(1, &1).is_none()); assert!(branch_ref.get_ref(100, &100).is_none()); Leaf::free(left as *mut _); Leaf::free(right as *mut _); Branch::free(branch as *mut _); assert_released(); } // Helpers macro_rules! test_3_leaf { ($fun:expr) => {{ let a: *mut Leaf = Node::new_leaf(1); let b: *mut Leaf = Node::new_leaf(1); let c: *mut Leaf = Node::new_leaf(1); unsafe { (*a).insert_or_update(10, 10, 10); (*b).insert_or_update(20, 20, 20); (*c).insert_or_update(30, 30, 30); } $fun(a, b, c); Leaf::free(a as *mut _); Leaf::free(b as *mut _); Leaf::free(c as *mut _); assert_released(); }}; } #[test] fn test_hashmap2_node_branch_add_min() { // This pattern occurs with "revsplit" to help with reverse // ordered inserts. test_3_leaf!(|a, b, c| { // Add the max two to the branch let branch: *mut Branch = Node::new_branch( 1, b as *mut Node, c as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); // Now min node (uses a diff function!) 
let r = branch_ref.add_node_left(a as *mut Node, 0); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), }; // Assert okay + verify assert!(branch_ref.verify()); Branch::free(branch as *mut _); }) } #[test] fn test_hashmap2_node_branch_add_mid() { test_3_leaf!(|a, b, c| { // Add the outer two to the branch let branch: *mut Branch = Node::new_branch( 1, a as *mut Node, c as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); let r = branch_ref.add_node(b as *mut Node); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), }; // Assert okay + verify assert!(branch_ref.verify()); Branch::free(branch as *mut _); }) } #[test] fn test_hashmap2_node_branch_add_max() { test_3_leaf!(|a, b, c| { // add the bottom two let branch: *mut Branch = Node::new_branch( 1, a as *mut Node, b as *mut Node, ); let branch_ref = unsafe { &mut *branch }; // verify assert!(branch_ref.verify()); let r = branch_ref.add_node(c as *mut Node); match r { BranchInsertState::Ok => {} _ => debug_assert!(false), }; // Assert okay + verify assert!(branch_ref.verify()); Branch::free(branch as *mut _); }) } // Helpers macro_rules! 
test_max_leaf { ($fun:expr) => {{ let a: *mut Leaf = Node::new_leaf(1); let b: *mut Leaf = Node::new_leaf(1); let c: *mut Leaf = Node::new_leaf(1); let d: *mut Leaf = Node::new_leaf(1); let e: *mut Leaf = Node::new_leaf(1); let f: *mut Leaf = Node::new_leaf(1); let g: *mut Leaf = Node::new_leaf(1); let h: *mut Leaf = Node::new_leaf(1); unsafe { (*a).insert_or_update(10, 10, 10); (*b).insert_or_update(20, 20, 20); (*c).insert_or_update(30, 30, 30); (*d).insert_or_update(40, 40, 40); (*e).insert_or_update(50, 50, 50); (*f).insert_or_update(60, 60, 60); (*g).insert_or_update(70, 70, 70); (*h).insert_or_update(80, 80, 80); } let branch: *mut Branch = Node::new_branch( 1, a as *mut Node, b as *mut Node, ); let branch_ref = unsafe { &mut *branch }; branch_ref.add_node(c as *mut Node); branch_ref.add_node(d as *mut Node); branch_ref.add_node(e as *mut Node); branch_ref.add_node(f as *mut Node); branch_ref.add_node(g as *mut Node); branch_ref.add_node(h as *mut Node); assert!(branch_ref.slots() == H_CAPACITY); $fun(branch_ref, 80); // MUST NOT verify here, as it's a use after free of the tests inserted node! Branch::free(branch as *mut _); Leaf::free(a as *mut _); Leaf::free(b as *mut _); Leaf::free(c as *mut _); Leaf::free(d as *mut _); Leaf::free(e as *mut _); Leaf::free(f as *mut _); Leaf::free(g as *mut _); Leaf::free(h as *mut _); assert_released(); }}; } #[test] fn test_hashmap2_node_branch_add_split_min() { // Used in rev split } #[test] fn test_hashmap2_node_branch_add_split_mid() { test_max_leaf!(|branch_ref: &mut Branch, max: u64| { let node: *mut Leaf = Node::new_leaf(1); // Branch already has up to H_CAPACITY, incs of 10 unsafe { (*node).insert_or_update(15, 15, 15); }; // Add in the middle let r = branch_ref.add_node(node as *mut _); match r { BranchInsertState::Split(x, y) => { unsafe { assert!((*x).min() == max - 10); assert!((*y).min() == max); } // X, Y will be freed by the macro caller. 
} _ => debug_assert!(false), }; assert!(branch_ref.verify()); // Free node. Leaf::free(node as *mut _); }) } #[test] fn test_hashmap2_node_branch_add_split_max() { test_max_leaf!(|branch_ref: &mut Branch, max: u64| { let node: *mut Leaf = Node::new_leaf(1); // Branch already has up to H_CAPACITY, incs of 10 unsafe { (*node).insert_or_update(200, 200, 200); }; // Add in at the end. let r = branch_ref.add_node(node as *mut _); match r { BranchInsertState::Split(y, mynode) => { unsafe { // println!("{:?}", (*y).min()); // println!("{:?}", (*mynode).min()); assert!((*y).min() == max); assert!((*mynode).min() == 200); } // Y will be freed by the macro caller. } _ => debug_assert!(false), }; assert!(branch_ref.verify()); // Free node. Leaf::free(node as *mut _); }) } #[test] fn test_hashmap2_node_branch_add_split_n1max() { // Add one before the end! test_max_leaf!(|branch_ref: &mut Branch, max: u64| { let node: *mut Leaf = Node::new_leaf(1); // Branch already has up to H_CAPACITY, incs of 10 let x = (max - 5) as usize; unsafe { (*node).insert_or_update(max - 5, x, x); }; // Add in one before the end. let r = branch_ref.add_node(node as *mut _); match r { BranchInsertState::Split(mynode, y) => { unsafe { assert!((*mynode).min() == max - 5); assert!((*y).min() == max); } // Y will be freed by the macro caller. } _ => debug_assert!(false), }; assert!(branch_ref.verify()); // Free node. 
            Leaf::free(node as *mut _);
        })
    }
}

concread-0.4.6/src/internals/hashmap/simd.rs

#[cfg(feature = "simd_support")]
use core_simd::u64x8;

use std::borrow::Borrow;
use std::fmt::Debug;
use std::hash::Hash;

use super::node::{Branch, Leaf};

pub(crate) enum KeyLoc {
    Ok(usize, usize),
    Collision(usize),
    Missing(usize),
}

impl KeyLoc {
    pub(crate) fn ok(self) -> Option<(usize, usize)> {
        if let KeyLoc::Ok(a, b) = self {
            Some((a, b))
        } else {
            None
        }
    }
}

#[cfg(not(feature = "simd_support"))]
pub(crate) fn branch_simd_search<K, V>(branch: &Branch<K, V>, h: u64) -> Result<usize, usize>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    debug_assert!(h < u64::MAX);
    for i in 0..branch.slots() {
        if h == unsafe { branch.ctrl.a.1[i] } {
            return Ok(i);
        }
    }
    for i in 0..branch.slots() {
        if h < unsafe { branch.ctrl.a.1[i] } {
            return Err(i);
        }
    }
    Err(branch.slots())
}

#[cfg(feature = "simd_support")]
pub(crate) fn branch_simd_search<K, V>(branch: &Branch<K, V>, h: u64) -> Result<usize, usize>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    debug_assert!(h < u64::MAX);
    debug_assert!({
        let want = u64x8::splat(u64::MAX);
        let r1 = want.lanes_eq(unsafe { *branch.ctrl.simd });
        let mask = r1.to_bitmask()[0] & 0b1111_1110;
        match (mask, branch.slots()) {
            (0b1111_1110, 0)
            | (0b1111_1100, 1)
            | (0b1111_1000, 2)
            | (0b1111_0000, 3)
            | (0b1110_0000, 4)
            | (0b1100_0000, 5)
            | (0b1000_0000, 6)
            | (0b0000_0000, 7) => true,
            _ => {
                eprintln!("branch mask -> {:b}", mask);
                eprintln!("branch slots -> {:?}", branch.slots());
                false
            }
        }
    });
    let want = u64x8::splat(h);
    let r1 = want.lanes_eq(unsafe { *branch.ctrl.simd });
    let mask = r1.to_bitmask()[0] & 0b1111_1110;
    match mask {
        0b0000_0001 => unreachable!(),
        0b0000_0010 => return Ok(0),
        0b0000_0100 => return Ok(1),
        0b0000_1000 => return Ok(2),
        0b0001_0000 => return Ok(3),
        0b0010_0000 => return Ok(4),
        0b0100_0000 => return Ok(5),
        0b1000_0000 => return Ok(6),
        0b0000_0000 => {}
        _ => unreachable!(),
    };
    let r2 = want.lanes_lt(unsafe { *branch.ctrl.simd });
    let mask = r2.to_bitmask()[0] & 0b1111_1110;
    match mask {
        0b1111_1110 => Err(0),
        0b1111_1100 => Err(1),
        0b1111_1000 => Err(2),
        0b1111_0000 => Err(3),
        0b1110_0000 => Err(4),
        0b1100_0000 => Err(5),
        0b1000_0000 => Err(6),
        0b0000_0000 => Err(7),
        // Means something is out of order or invalid!
        _ => unreachable!(),
    }
}

/*
#[cfg(not(feature = "simd_support"))]
pub(crate) fn leaf_simd_get_slot<K, V>(leaf: &Leaf<K, V>, h: u64) -> Option<usize>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    debug_assert!(h < u64::MAX);
    for cand_idx in 0..leaf.slots() {
        if h == unsafe { leaf.ctrl.a.1[cand_idx] } {
            return Some(cand_idx);
        }
    }
    None
}

#[cfg(feature = "simd_support")]
pub(crate) fn leaf_simd_get_slot<K, V>(leaf: &Leaf<K, V>, h: u64) -> Option<usize>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    // This is an important piece of logic!
    debug_assert!(h < u64::MAX);
    debug_assert!({
        let want = u64x8::splat(u64::MAX);
        let r1 = want.lanes_eq(unsafe { *leaf.ctrl.simd });
        let mask = r1.to_bitmask()[0] & 0b1111_1110;
        match (mask, leaf.slots()) {
            (0b1111_1110, 0)
            | (0b1111_1100, 1)
            | (0b1111_1000, 2)
            | (0b1111_0000, 3)
            | (0b1110_0000, 4)
            | (0b1100_0000, 5)
            | (0b1000_0000, 6)
            | (0b0000_0000, 7) => true,
            _ => false,
        }
    });
    let want = u64x8::splat(h);
    let r1 = want.lanes_eq(unsafe { *leaf.ctrl.simd });
    // println!("want: {:?}", want);
    // println!("ctrl: {:?}", unsafe { *leaf.ctrl.simd });
    // Always discard the meta field
    let mask = r1.to_bitmask()[0] & 0b1111_1110;
    // println!("res eq: 0b{:b}", mask);
    if mask != 0 {
        // Something was equal
        let cand_idx = match mask {
            // 0b0000_0001 => {},
            0b0000_0010 => 0,
            0b0000_0100 => 1,
            0b0000_1000 => 2,
            0b0001_0000 => 3,
            0b0010_0000 => 4,
            0b0100_0000 => 5,
            0b1000_0000 => 6,
            _ => unreachable!(),
        };
        return Some(cand_idx);
    }
    None
}
*/

#[cfg(not(feature = "simd_support"))]
pub(crate) fn leaf_simd_search<K, V, Q>(leaf: &Leaf<K, V>, h: u64, k: &Q) -> KeyLoc
where
    K: Hash + Eq + Clone + Debug + Borrow<Q>,
    V: Clone,
    Q: Eq + ?Sized,
{
    debug_assert!(h < u64::MAX);
    for cand_idx in 0..leaf.slots() {
        if h == unsafe { leaf.ctrl.a.1[cand_idx] } {
            let bucket = unsafe { (*leaf.values[cand_idx].as_ptr()).as_slice() };
            for (i, d) in bucket.iter().enumerate() {
                if k.eq(d.k.borrow()) {
                    return KeyLoc::Ok(cand_idx, i);
                }
            }
            // Wasn't found despite all the collisions, err.
            return KeyLoc::Collision(cand_idx);
        }
    }
    for i in 0..leaf.slots() {
        if h < unsafe { leaf.ctrl.a.1[i] } {
            return KeyLoc::Missing(i);
        }
    }
    KeyLoc::Missing(leaf.slots())
}

#[cfg(feature = "simd_support")]
pub(crate) fn leaf_simd_search<K, V, Q>(leaf: &Leaf<K, V>, h: u64, k: &Q) -> KeyLoc
where
    K: Hash + Eq + Clone + Debug + Borrow<Q>,
    V: Clone,
    Q: Eq + ?Sized,
{
    // This is an important piece of logic!
    debug_assert!(h < u64::MAX);
    debug_assert!({
        let want = u64x8::splat(u64::MAX);
        let r1 = want.lanes_eq(unsafe { *leaf.ctrl.simd });
        let mask = r1.to_bitmask()[0] & 0b1111_1110;
        match (mask, leaf.slots()) {
            (0b1111_1110, 0)
            | (0b1111_1100, 1)
            | (0b1111_1000, 2)
            | (0b1111_0000, 3)
            | (0b1110_0000, 4)
            | (0b1100_0000, 5)
            | (0b1000_0000, 6)
            | (0b0000_0000, 7) => true,
            _ => false,
        }
    });
    let want = u64x8::splat(h);
    let r1 = want.lanes_eq(unsafe { *leaf.ctrl.simd });
    // println!("want: {:?}", want);
    // println!("ctrl: {:?}", unsafe { *leaf.ctrl.simd });
    // Always discard the meta field
    let mask = r1.to_bitmask()[0] & 0b1111_1110;
    // println!("res eq: 0b{:b}", mask);
    if mask != 0 {
        // Something was equal
        let cand_idx = match mask {
            // 0b0000_0001 => {},
            0b0000_0010 => 0,
            0b0000_0100 => 1,
            0b0000_1000 => 2,
            0b0001_0000 => 3,
            0b0010_0000 => 4,
            0b0100_0000 => 5,
            0b1000_0000 => 6,
            _ => unreachable!(),
        };
        // Search in the bucket. Generally this is inlined and one element.
        let bucket = unsafe { (*leaf.values[cand_idx].as_ptr()).as_slice() };
        for (i, d) in bucket.iter().enumerate() {
            if k.eq(d.k.borrow()) {
                return KeyLoc::Ok(cand_idx, i);
            }
        }
        // Wasn't found despite all the collisions, err.
        return KeyLoc::Collision(cand_idx);
    }
    let r2 = want.lanes_lt(unsafe { *leaf.ctrl.simd });
    // Always discard the meta field
    let mask = r2.to_bitmask()[0] & 0b1111_1110;
    // println!("res lt: 0b{:b}", mask);
    let r = match mask {
        0b1111_1110 => 0,
        0b1111_1100 => 1,
        0b1111_1000 => 2,
        0b1111_0000 => 3,
        0b1110_0000 => 4,
        0b1100_0000 => 5,
        0b1000_0000 => 6,
        0b0000_0000 => 7,
        // Means something is out of order or invalid!
        _ => unreachable!(),
    };
    KeyLoc::Missing(r)
}

concread-0.4.6/src/internals/hashmap/states.rs

use super::node::{Leaf, Node};
use std::fmt::Debug;
use std::hash::Hash;

#[derive(Debug)]
pub(crate) enum LeafInsertState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    Ok(Option<V>),
    // Split(K, V),
    Split(*mut Leaf<K, V>),
    // We split in the reverse direction.
    RevSplit(*mut Leaf<K, V>),
}

#[derive(Debug)]
pub(crate) enum LeafRemoveState<V>
where
    V: Clone,
{
    Ok(Option<V>),
    // Indicate that we found the associated value, but this
    // removal means we no longer exist so should be removed.
    Shrink(Option<V>),
}

#[derive(Debug)]
pub(crate) enum BranchInsertState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    Ok,
    // Two nodes that need addition to a new branch?
    Split(*mut Node<K, V>, *mut Node<K, V>),
}

#[derive(Debug)]
pub(crate) enum BranchShrinkState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    Balanced,
    Merge(*mut Node<K, V>),
    Shrink(*mut Node<K, V>),
}

/*
#[derive(Debug)]
pub(crate) enum BranchTrimState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    Complete,
    Promote(*mut Node<K, V>),
}
*/

/*
pub(crate) enum CRTrimState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    Complete,
    Clone(*mut Node<K, V>),
    Promote(*mut Node<K, V>),
}
*/

#[derive(Debug)]
pub(crate) enum CRInsertState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    // We did not need to clone, here is the result.
    NoClone(Option<V>),
    // We had to clone the referenced node provided.
    Clone(Option<V>, *mut Node<K, V>),
    // We had to split, but did not need a clone.
    // REMEMBER: In all split cases it means the key MUST NOT have
    // previously existed, so it implies return none to the
    // caller.
    Split(*mut Node<K, V>),
    RevSplit(*mut Node<K, V>),
    // We had to clone and split.
    CloneSplit(*mut Node<K, V>, *mut Node<K, V>),
    CloneRevSplit(*mut Node<K, V>, *mut Node<K, V>),
}

#[derive(Debug)]
pub(crate) enum CRCloneState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    Clone(*mut Node<K, V>),
    NoClone,
}

#[derive(Debug)]
pub(crate) enum CRRemoveState<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    // We did not need to clone, here is the result.
    NoClone(Option<V>),
    // We had to clone the referenced node provided.
    Clone(Option<V>, *mut Node<K, V>),
    // Shrink(Option<V>),
    // CloneShrink(Option<V>, *mut Node<K, V>),
}

concread-0.4.6/src/internals/hashtrie/cursor.rs

//! The cursor is what actually knits a trie together from the parts
//! we have, and has an important role to keep the system consistent.
//!
//! Additionally, the cursor also is responsible for general movement
//! throughout the structure and how to handle that effectively

#![allow(unstable_name_collisions)]

use sptr::Strict;

use crate::internals::lincowcell::LinCowCellCapable;

use std::borrow::Borrow;
use std::cmp::Ordering;
use std::collections::{BTreeSet, VecDeque};
use std::fmt::{self, Debug};
use std::marker::PhantomData;
use std::ptr;
use std::sync::Mutex;

use smallvec::SmallVec;

use super::iter::*;

#[cfg(feature = "ahash")]
use ahash::RandomState;
#[cfg(not(feature = "ahash"))]
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

// This defines the max height of our tree. Gives 16777216.0 entries
// This only consumes 16KB if fully populated
#[cfg(feature = "hashtrie_skinny")]
pub(crate) const MAX_HEIGHT: u64 = 8;
// The true absolute max height
#[cfg(feature = "hashtrie_skinny")]
#[cfg(test)]
const ABS_MAX_HEIGHT: u64 = 21;
#[cfg(feature = "hashtrie_skinny")]
pub(crate) const HT_CAPACITY: usize = 8;
#[cfg(feature = "hashtrie_skinny")]
const HASH_MASK: u64 =
    0b0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0111;
#[cfg(feature = "hashtrie_skinny")]
const SHIFT: u64 = 3;

// This defines the max height of our tree. Gives 16777216.0 entries
#[cfg(not(feature = "hashtrie_skinny"))]
pub(crate) const MAX_HEIGHT: u64 = 6;
#[cfg(not(feature = "hashtrie_skinny"))]
#[cfg(test)]
const ABS_MAX_HEIGHT: u64 = 16;
#[cfg(not(feature = "hashtrie_skinny"))]
pub(crate) const HT_CAPACITY: usize = 16;
#[cfg(not(feature = "hashtrie_skinny"))]
const HASH_MASK: u64 =
    0b0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_0000_1111;
#[cfg(not(feature = "hashtrie_skinny"))]
const SHIFT: u64 = 4;

const TAG: usize = 0b0011;
const UNTAG: usize = usize::MAX - TAG;

// const FLAG_CLEAN: usize = 0b00;
const FLAG_DIRTY: usize = 0b01;
const MARK_CLEAN: usize = usize::MAX - FLAG_DIRTY;

const FLAG_BRANCH: usize = 0b00;
const FLAG_BUCKET: usize = 0b10;

const DEFAULT_BUCKET_ALLOC: usize = 1;

macro_rules! hash_key {
    ($self:expr, $k:expr) => {{
        let mut hasher = $self.build_hasher.build_hasher();
        $k.hash(&mut hasher);
        hasher.finish()
    }};
}

#[cfg(all(test, not(miri)))]
thread_local!(static ALLOC_LIST: Mutex<BTreeSet<Ptr>> = const { Mutex::new(BTreeSet::new()) });
#[cfg(all(test, not(miri)))]
thread_local!(static WRITE_LIST: Mutex<BTreeSet<Ptr>> = const { Mutex::new(BTreeSet::new()) });

#[cfg(test)]
fn assert_released() {
    #[cfg(not(miri))]
    {
        let is_empty = ALLOC_LIST.with(|llist| {
            let x = llist.lock().unwrap();
            println!("Remaining -> {:?}", x);
            x.is_empty()
        });
        assert!(is_empty);
    }
}

#[derive(Clone, Copy)]
pub(crate) struct Ptr {
    // We essential are using this as a void pointer for provenance reasons.
    p: *mut i32,
}

impl PartialEq for Ptr {
    fn eq(&self, other: &Self) -> bool {
        let s = self.p.map_addr(|a| a & UNTAG);
        let o = other.p.map_addr(|a| a & UNTAG);
        s == o
    }
}

impl Eq for Ptr {}

impl PartialOrd for Ptr {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Ptr {
    fn cmp(&self, other: &Self) -> Ordering {
        let s = self.p.map_addr(|a| a & UNTAG);
        let o = other.p.map_addr(|a| a & UNTAG);
        s.cmp(&o)
    }
}

impl Debug for Ptr {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.debug_struct("Ptr")
            .field("p", &self.p)
            .field("bucket", &self.is_bucket())
            .field("dirty", &self.is_dirty())
            .field("null", &self.is_null())
            .finish()
    }
}

impl Ptr {
    fn null_mut() -> Self {
        debug_assert!(std::mem::size_of::<Ptr>() == std::mem::size_of::<*mut Branch<(), ()>>());
        debug_assert!(std::mem::size_of::<Ptr>() == std::mem::size_of::<*mut Bucket<(), ()>>());
        Ptr { p: ptr::null_mut() }
    }

    #[inline(always)]
    pub(crate) fn is_null(&self) -> bool {
        self.p.is_null()
    }

    #[inline(always)]
    pub(crate) fn is_bucket(&self) -> bool {
        self.p.addr() & FLAG_BUCKET == FLAG_BUCKET
    }

    #[inline(always)]
    pub(crate) fn is_branch(&self) -> bool {
        self.p.addr() & FLAG_BUCKET != FLAG_BUCKET
    }

    #[inline(always)]
    fn is_dirty(&self) -> bool {
        self.p.addr() & FLAG_DIRTY == FLAG_DIRTY
    }

    #[cfg(all(test, not(miri)))]
    fn untagged(&self) -> Self {
        let p = self.p.map_addr(|a| a & UNTAG);
        Ptr { p }
    }

    #[inline(always)]
    fn mark_dirty(&mut self) {
        #[cfg(all(test, not(miri)))]
        WRITE_LIST.with(|llist| assert!(llist.lock().unwrap().insert(self.untagged())));
        self.p = self.p.map_addr(|a| a | FLAG_DIRTY)
    }

    #[inline(always)]
    fn mark_clean(&mut self) {
        #[cfg(all(test, not(miri)))]
        WRITE_LIST.with(|llist| assert!(llist.lock().unwrap().remove(&(self.untagged()))));
        self.p = self.p.map_addr(|a| a & MARK_CLEAN)
    }

    #[inline(always)]
    pub(crate) fn as_bucket<K, V>(&self) -> &Bucket<K, V> {
        debug_assert!(self.is_bucket());
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| assert!(llist.lock().unwrap().contains(&self.untagged())));
        unsafe { &*(self.p.map_addr(|a| a & UNTAG) as *const Bucket<K, V>) }
    }

    #[inline(always)]
    fn as_bucket_raw<K, V>(&self) -> *mut Bucket<K, V> {
        debug_assert!(self.is_bucket());
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| assert!(llist.lock().unwrap().contains(&self.untagged())));
        self.p.map_addr(|a| a & UNTAG) as *mut Bucket<K, V>
    }

    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub(crate) fn as_bucket_mut<K, V>(&self) -> &mut Bucket<K, V> {
        debug_assert!(self.is_bucket());
        debug_assert!(self.is_dirty());
        #[cfg(all(test, not(miri)))]
        WRITE_LIST.with(|llist| {
            let wlist_guard = llist.lock().unwrap();
            assert!(wlist_guard.contains(&self.untagged()))
        });
        unsafe { &mut *(self.p.map_addr(|a| a & UNTAG) as *mut Bucket<K, V>) }
    }

    #[inline(always)]
    pub(crate) fn as_branch<K, V>(&self) -> &Branch<K, V> {
        debug_assert!(self.is_branch());
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| assert!(llist.lock().unwrap().contains(&self.untagged())));
        unsafe { &*(self.p.map_addr(|a| a & UNTAG) as *const Branch<K, V>) }
    }

    #[inline(always)]
    fn as_branch_raw<K, V>(&self) -> *mut Branch<K, V> {
        debug_assert!(self.is_branch());
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| assert!(llist.lock().unwrap().contains(&self.untagged())));
        self.p.map_addr(|a| a & UNTAG) as *mut Branch<K, V>
    }

    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub(crate) fn as_branch_mut<K, V>(&self) -> &mut Branch<K, V> {
        debug_assert!(self.is_branch());
        debug_assert!(self.is_dirty());
        #[cfg(all(test, not(miri)))]
        WRITE_LIST.with(|llist| {
            let wlist_guard = llist.lock().unwrap();
            assert!(wlist_guard.contains(&self.untagged()))
        });
        unsafe { &mut *(self.p.map_addr(|a| a & UNTAG) as *mut Branch<K, V>) }
    }

    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    unsafe fn as_branch_mut_nock<K, V>(&self) -> &mut Branch<K, V> {
        debug_assert!(self.is_branch());
        debug_assert!(self.is_dirty());
        // This is the same as above, but bypasses the wlist check.
        &mut *(self.p.map_addr(|a| a & UNTAG) as *mut Branch<K, V>)
    }

    fn free<K, V>(&self) {
        // We MUST have allocated this, else it's a double free
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| assert!(llist.lock().unwrap().contains(&self.untagged())));
        // It's getting freeeeeedddd
        unsafe {
            if self.is_bucket() {
                let _ = Box::from_raw(self.as_bucket_raw::<K, V>());
            } else {
                let _ = Box::from_raw(self.as_branch_raw::<K, V>());
            }
        }
        #[cfg(all(test, not(miri)))]
        if self.is_dirty() {
            WRITE_LIST.with(|llist| assert!(llist.lock().unwrap().remove(&(self.untagged()))))
        };
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| assert!(llist.lock().unwrap().remove(&self.untagged())));
    }
}

impl<K, V> From<Box<Branch<K, V>>> for Ptr {
    fn from(b: Box<Branch<K, V>>) -> Self {
        let rptr: *mut Branch<K, V> = Box::into_raw(b);
        #[allow(clippy::let_and_return)]
        let r = Self {
            p: rptr.map_addr(|a| a | FLAG_BRANCH) as *mut i32,
        };
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| llist.lock().unwrap().insert(r.untagged()));
        r
    }
}

impl<K, V> From<Box<Bucket<K, V>>> for Ptr {
    fn from(b: Box<Bucket<K, V>>) -> Self {
        let rptr: *mut Bucket<K, V> = Box::into_raw(b);
        #[allow(clippy::let_and_return)]
        let r = Self {
            p: rptr.map_addr(|a| a | FLAG_BUCKET) as *mut i32,
        };
        #[cfg(all(test, not(miri)))]
        ALLOC_LIST.with(|llist| llist.lock().unwrap().insert(r.untagged()));
        r
    }
}

/// A stored K/V in the hash bucket.
#[derive(Clone)]
pub struct Datum<K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    /// The hash of K.
    pub h: u64,
    /// The K in K:V.
    pub k: K,
    /// The V in K:V.
pub v: V, } type Bucket = SmallVec<[Datum; DEFAULT_BUCKET_ALLOC]>; fn new_dirty_bucket_ptr() -> Ptr { let bkt: Box> = Box::new(SmallVec::new()); let mut p = Ptr::from(bkt); p.mark_dirty(); p } #[repr(align(64))] pub(crate) struct Branch where K: Hash + Eq + Clone + Debug, V: Clone, { // Pointer to either a Branch, or a Bucket. pub nodes: [Ptr; HT_CAPACITY], k: PhantomData, v: PhantomData, } fn new_dirty_branch_ptr() -> Ptr { let brch: Box> = Branch::new(); let mut p = Ptr::from(brch); p.mark_dirty(); debug_assert!(p.is_dirty()); debug_assert!(p.is_branch()); p } impl Branch { fn new() -> Box { debug_assert!(std::mem::size_of::() == std::mem::size_of::<*mut Branch>()); Box::new(Branch { nodes: [ Ptr::null_mut(), Ptr::null_mut(), Ptr::null_mut(), Ptr::null_mut(), Ptr::null_mut(), Ptr::null_mut(), Ptr::null_mut(), Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), #[cfg(not(feature = "hashtrie_skinny"))] Ptr::null_mut(), ], k: PhantomData, v: PhantomData, }) } fn clone_dirty(&self) -> Ptr { let bc: Box> = Box::new(Branch { nodes: self.nodes, k: PhantomData, v: PhantomData, }); let mut p = Ptr::from(bc); p.mark_dirty(); p } } #[derive(Debug)] pub(crate) struct SuperBlock where K: Hash + Eq + Clone + Debug, V: Clone, { root: Ptr, length: usize, txid: u64, build_hasher: RandomState, k: PhantomData, v: PhantomData, } impl SuperBlock { /// 🔥 🔥 🔥 pub unsafe fn new() -> Self { #[cfg(debug)] assert!(MAX_HEIGHT <= ABS_MAX_HEIGHT); let b: Box> = Branch::new(); let root = Ptr::from(b); SuperBlock { root, length: 0, txid: 1, build_hasher: RandomState::new(), k: PhantomData, v: PhantomData, } } } impl 
<K: Hash + Eq + Clone + Debug, V: Clone> LinCowCellCapable<CursorRead<K, V>, CursorWrite<K, V>>
    for SuperBlock<K, V>
{
    fn create_reader(&self) -> CursorRead<K, V> {
        CursorRead::new(self)
    }

    fn create_writer(&self) -> CursorWrite<K, V> {
        CursorWrite::new(self)
    }

    fn pre_commit(
        &mut self,
        mut new: CursorWrite<K, V>,
        prev: &CursorRead<K, V>,
    ) -> CursorRead<K, V> {
        let mut prev_last_seen = prev.last_seen.lock().unwrap();
        debug_assert!((*prev_last_seen).is_empty());

        let new_last_seen = &mut new.last_seen;
        std::mem::swap(&mut (*prev_last_seen), &mut (*new_last_seen));
        debug_assert!((*new_last_seen).is_empty());

        // Mark anything in the tree that is dirty as clean.
        new.mark_clean();

        // Now when the lock is dropped, both sides see the correct info and garbage for drops.
        // Clear first seen, we won't be dropping them from here.
        new.first_seen.clear();

        self.root = new.root;
        self.length = new.length;
        self.txid = new.txid;

        debug_assert!(!self.root.is_dirty());
        debug_assert!(self.root.is_branch());

        // Create the new reader.
        CursorRead::new(self)
    }
}

impl<K: Hash + Eq + Clone + Debug, V: Clone> Drop for SuperBlock<K, V> {
    fn drop(&mut self) {
        // eprintln!("Releasing SuperBlock ...");
        // We must be the last SB and no txns exist. Drop the tree now.
        // TODO: Calc this based on size.
let mut first_seen = Vec::with_capacity(16); let mut stack = VecDeque::new(); stack.push_back(self.root); while let Some(tgt_ptr) = stack.pop_front() { first_seen.push(tgt_ptr); if tgt_ptr.is_branch() { for n in tgt_ptr.as_branch::().nodes.iter() { if !n.is_null() { stack.push_back(*n); } } } } first_seen.iter().for_each(|p| p.free::()); } } pub(crate) trait CursorReadOps { fn get_root_ptr(&self) -> Ptr; fn len(&self) -> usize; fn get_txid(&self) -> u64; fn hash_key(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized; fn search(&self, h: u64, k: &Q) -> Option<&V> where K: Borrow, Q: Hash + Eq + ?Sized, { let mut node = self.get_root_ptr(); for d in 0..MAX_HEIGHT { let bref: &Branch = node.as_branch(); let idx = ((h & (HASH_MASK << (d * SHIFT))) >> (d * SHIFT)) as usize; debug_assert!(idx < HT_CAPACITY); let tgt_ptr = bref.nodes[idx]; // If null if tgt_ptr.is_null() { // Not found return None; } else if tgt_ptr.is_branch() { node = tgt_ptr; } else { for datum in tgt_ptr.as_bucket::().iter() { if datum.h == h && k.eq(datum.k.borrow()) { // Must be it! let x = &datum.v as *const V; return Some(unsafe { &*x as &V }); } } // Not found. return None; } } unreachable!(); } fn kv_iter(&self) -> Iter { Iter::new(self.get_root_ptr(), self.len()) } fn k_iter(&self) -> KeyIter { KeyIter::new(self.get_root_ptr(), self.len()) } fn v_iter(&self) -> ValueIter { ValueIter::new(self.get_root_ptr(), self.len()) } #[allow(unused)] fn verify_inner(&self, expect_clean: bool) { let root = self.get_root_ptr(); assert!(root.is_branch()); let mut stack = VecDeque::new(); let mut ptr_map = BTreeSet::new(); let mut length = 0; stack.push_back(root); while let Some(tgt_ptr) = stack.pop_front() { // Is true if not present. Every ptr should be unique! 
assert!(ptr_map.insert(tgt_ptr)); if expect_clean { assert!(!tgt_ptr.is_dirty()); } if tgt_ptr.is_branch() { // For all nodes for n in tgt_ptr.as_branch::().nodes.iter() { if !n.is_null() { stack.push_back(*n); } } } else { assert!(tgt_ptr.is_bucket()); // How long? length += tgt_ptr.as_bucket::().len(); } } // Is our element tracker correct? assert!(length == self.len()); } #[allow(unused)] fn verify(&self); } #[derive(Debug)] pub(crate) struct CursorWrite where K: Hash + Eq + Clone + Debug, V: Clone, { txid: u64, length: usize, root: Ptr, last_seen: Vec, first_seen: Vec, build_hasher: RandomState, k: PhantomData, v: PhantomData, } impl CursorWrite { pub(crate) fn new(sblock: &SuperBlock) -> Self { let txid = sblock.txid + 1; let length = sblock.length; let root = sblock.root; let last_seen = Vec::with_capacity(16); let first_seen = Vec::with_capacity(16); let build_hasher = sblock.build_hasher.clone(); CursorWrite { txid, length, root, last_seen, first_seen, build_hasher, k: PhantomData, v: PhantomData, } } fn dirty_root(&mut self) { // If needed, clone root and mark dirty. Swap back into the // cursor. debug_assert!(self.root.is_branch()); if !self.root.is_dirty() { let clean_bref: &Branch = self.root.as_branch(); self.last_seen.push(self.root); // Over-write the ptr with our new ptr. 
self.root = clean_bref.clone_dirty(); self.first_seen.push(self.root); } } fn mark_clean(&mut self) { let root = self.get_root_ptr(); assert!(root.is_branch()); let mut stack: VecDeque = VecDeque::new(); if root.is_dirty() { stack.push_back(root); self.root.mark_clean(); } while let Some(tgt_ptr) = stack.pop_front() { // For all nodes for n in unsafe { tgt_ptr.as_branch_mut_nock::().nodes.iter_mut() } { if !n.is_null() && n.is_dirty() { if n.is_branch() { stack.push_back(*n); } n.mark_clean(); } } if cfg!(debug_assertions) { for n in tgt_ptr.as_branch::().nodes.iter() { assert!(n.is_null() || !n.is_dirty()); } } } } pub(crate) fn insert(&mut self, h: u64, k: K, mut v: V) -> Option { self.dirty_root(); let mut node = self.root; for d in 0..MAX_HEIGHT { // In the current node let bref: &mut Branch = node.as_branch_mut(); // Get our idx from the node. let shift = d * SHIFT; let idx = ((h & (HASH_MASK << shift)) >> shift) as usize; debug_assert!(idx < HT_CAPACITY); let tgt_ptr = bref.nodes[idx]; // If null if tgt_ptr.is_null() { let dbkt_ptr = new_dirty_bucket_ptr::(); self.first_seen.push(dbkt_ptr); // Insert the item. dbkt_ptr.as_bucket_mut().push(Datum { h, k, v }); // Place the dbkt_ptr into the branch bref.nodes[idx] = dbkt_ptr; // Correct the size. self.length += 1; // Done! return None; } else if tgt_ptr.is_branch() { if tgt_ptr.is_dirty() { node = tgt_ptr; } else { self.last_seen.push(tgt_ptr); let from_bref: &Branch = tgt_ptr.as_branch(); // Over-write the ptr with our new ptr. let nbrch_ptr = from_bref.clone_dirty(); self.first_seen.push(nbrch_ptr); bref.nodes[idx] = nbrch_ptr; // next ptr is our new branch, we let the loop continue. node = nbrch_ptr; } } else { // if bucket if d == (MAX_HEIGHT - 1) { let bkt_ptr = if tgt_ptr.is_dirty() { // If the bkt is dirty, we can just append. 
tgt_ptr } else { self.last_seen.push(tgt_ptr); let dbkt_ptr = new_dirty_bucket_ptr::(); self.first_seen.push(dbkt_ptr); let dbkt = dbkt_ptr.as_bucket_mut::(); tgt_ptr.as_bucket().iter().for_each(|datum| { dbkt.push(datum.clone()); }); bref.nodes[idx] = dbkt_ptr; dbkt_ptr }; // Handle duplicate K? let bkt = bkt_ptr.as_bucket_mut::(); for datum in bkt.iter_mut() { if datum.h == h && k.eq(datum.k.borrow()) { // Collision, swap and replace. std::mem::swap(&mut datum.v, &mut v); return Some(v); } } // Wasn't found, append. self.length += 1; bkt.push(Datum { h, k, v }); return None; } else { let bkt_ptr = if tgt_ptr.is_dirty() { // If the bkt is dirty, we can just re-locate it tgt_ptr } else { // The logic for if a bucket can be n>1 // isn't added! debug_assert!(tgt_ptr.as_bucket::().len() == 1); // If it's clean, we need to duplicate it. self.last_seen.push(tgt_ptr); let dbkt_ptr = new_dirty_bucket_ptr::(); self.first_seen.push(dbkt_ptr); let dbkt = dbkt_ptr.as_bucket_mut::(); tgt_ptr.as_bucket().iter().for_each(|datum| { dbkt.push(datum.clone()); }); bref.nodes[idx] = dbkt_ptr; dbkt_ptr }; // create new branch, and insert the bucket. let nbrch_ptr = new_dirty_branch_ptr::(); self.first_seen.push(nbrch_ptr); // Locate where in the new branch we need to relocate // our bucket. let bh = bkt_ptr.as_bucket_mut::()[0].h; let shift = (d + 1) * SHIFT; let bidx = ((bh & (HASH_MASK << shift)) >> shift) as usize; debug_assert!(bidx < HT_CAPACITY); nbrch_ptr.as_branch_mut::().nodes[bidx] = bkt_ptr; bref.nodes[idx] = nbrch_ptr; // next ptr is our new branch, we let the loop continue. node = nbrch_ptr; } } } unreachable!(); } pub(crate) fn remove(&mut self, h: u64, k: &K) -> Option { self.dirty_root(); let mut node = self.root; for d in 0..MAX_HEIGHT { debug_assert!(node.is_dirty()); debug_assert!(node.is_branch()); // In the current node let bref: &mut Branch = node.as_branch_mut(); // Get our idx from the node. 
let shift = d * SHIFT; let idx = ((h & (HASH_MASK << shift)) >> shift) as usize; debug_assert!(idx < HT_CAPACITY); let tgt_ptr = bref.nodes[idx]; // If null if tgt_ptr.is_null() { // Done! return None; } else if tgt_ptr.is_branch() { if tgt_ptr.is_dirty() { node = tgt_ptr; } else { self.last_seen.push(tgt_ptr); let from_bref: &Branch = tgt_ptr.as_branch(); // Over-write the ptr with our new ptr. let nbrch_ptr = from_bref.clone_dirty(); self.first_seen.push(nbrch_ptr); bref.nodes[idx] = nbrch_ptr; // next ptr is our new branch, we let the loop continue. node = nbrch_ptr; } } else { // if bucket // Fast path - if the tgt is len 1, we can just remove it. debug_assert!(!tgt_ptr.as_bucket::().is_empty()); if tgt_ptr.as_bucket::().len() == 1 { let tgt_bkt = tgt_ptr.as_bucket::(); let datum = &tgt_bkt[0]; if datum.h == h && k.eq(datum.k.borrow()) { bref.nodes[idx] = Ptr::null_mut(); // There is a bit of a difficult case here. If a pointer // is dirty, then it must also have been allocated in // this txn. It's possible we risk leaking the memory // here since the node will be in first_seen, and if we // commit after a insert + remove + insert then the node // risks orphaning. However, this also leads to a possible // double free since if we free here AND a rollback occurs // then we haven't dropped the node. // // So this is a sticky situation - As a result we have to // walk the first_seen and remove this element before we // do this out-of-band free. let v = if tgt_ptr.is_dirty() { let tgt_bkt_mut = tgt_ptr.as_bucket_mut::(); let Datum { v, .. } = tgt_bkt_mut.remove(0); // Keep any pointer that ISNT the one we are oob freeing. self.first_seen.retain(|e| *e != tgt_ptr); tgt_ptr.free::(); v } else { self.last_seen.push(tgt_ptr); datum.v.clone() }; self.length -= 1; return Some(v); } else { return None; } } else { let bkt_ptr = if tgt_ptr.is_dirty() { // If the bkt is dirty, we can just manipulate it. 
tgt_ptr } else { self.last_seen.push(tgt_ptr); let dbkt_ptr = new_dirty_bucket_ptr::(); self.first_seen.push(dbkt_ptr); let dbkt = dbkt_ptr.as_bucket_mut::(); tgt_ptr.as_bucket().iter().for_each(|datum| { dbkt.push(datum.clone()); }); bref.nodes[idx] = dbkt_ptr; dbkt_ptr }; // Handle duplicate K? let bkt = bkt_ptr.as_bucket_mut::(); for (i, datum) in bkt.iter().enumerate() { if datum.h == h && k.eq(datum.k.borrow()) { // Found, remove. let Datum { v, .. } = bkt.remove(i); self.length -= 1; return Some(v); } } return None; } } } unreachable!(); } pub(crate) unsafe fn get_slot_mut_ref(&mut self, h: u64) -> Option<&mut [Datum]> { self.dirty_root(); let mut node = self.root; for d in 0..MAX_HEIGHT { debug_assert!(node.is_dirty()); debug_assert!(node.is_branch()); // In the current node let bref: &mut Branch = node.as_branch_mut(); // Get our idx from the node. let shift = d * SHIFT; let idx = ((h & (HASH_MASK << shift)) >> shift) as usize; debug_assert!(idx < HT_CAPACITY); let tgt_ptr = bref.nodes[idx]; // If null if tgt_ptr.is_null() { // Done! return None; } else if tgt_ptr.is_branch() { if tgt_ptr.is_dirty() { node = tgt_ptr; } else { self.last_seen.push(tgt_ptr); let from_bref: &Branch = tgt_ptr.as_branch(); // Over-write the ptr with our new ptr. let nbrch_ptr = from_bref.clone_dirty(); self.first_seen.push(nbrch_ptr); bref.nodes[idx] = nbrch_ptr; // next ptr is our new branch, we let the loop continue. node = nbrch_ptr; } } else { debug_assert!(!tgt_ptr.as_bucket::().is_empty()); let bkt_ptr = if tgt_ptr.is_dirty() { // If the bkt is dirty, we can just manipulate it. tgt_ptr } else { self.last_seen.push(tgt_ptr); let dbkt_ptr = new_dirty_bucket_ptr::(); self.first_seen.push(dbkt_ptr); let dbkt = dbkt_ptr.as_bucket_mut::(); tgt_ptr.as_bucket().iter().for_each(|datum| { dbkt.push(datum.clone()); }); bref.nodes[idx] = dbkt_ptr; dbkt_ptr }; // Handle duplicate K? 
let bkt = bkt_ptr.as_bucket_mut::(); let x = bkt.as_mut_slice() as *mut [Datum]; return Some(&mut *x as &mut [Datum]); } } unreachable!(); } pub(crate) fn get_mut_ref(&mut self, h: u64, k: &K) -> Option<&mut V> { unsafe { self.get_slot_mut_ref(h) }.and_then(|bkt| { bkt.iter_mut() .filter_map(|datum| { if datum.h == h && k.eq(datum.k.borrow()) { let x = &mut datum.v as *mut V; Some(unsafe { &mut *x as &mut V }) } else { None } }) .next() }) } pub(crate) fn clear(&mut self) { // First clear last_seen, since we can't have any duplicates. // self.last_seen.clear(); // self.first_seen.clear(); let mut stack = VecDeque::new(); stack.push_back(self.root); while let Some(tgt_ptr) = stack.pop_front() { self.last_seen.push(tgt_ptr); if tgt_ptr.is_branch() { for n in tgt_ptr.as_branch::().nodes.iter() { if !n.is_null() { self.last_seen.push(*n); } } } } // Now make a new root. let b: Box> = Branch::new(); self.root = Ptr::from(b); self.root.mark_dirty(); self.length = 0; } } impl Extend<(K, V)> for CursorWrite { fn extend>(&mut self, iter: I) { iter.into_iter().for_each(|(k, v)| { let h = self.hash_key(&k); let _ = self.insert(h, k, v); }); } } impl Drop for CursorWrite { fn drop(&mut self) { self.first_seen.iter().for_each(|p| p.free::()) } } impl CursorReadOps for CursorWrite { fn get_root_ptr(&self) -> Ptr { self.root } fn len(&self) -> usize { self.length } fn get_txid(&self) -> u64 { self.txid } fn hash_key(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized, { hash_key!(self, k) } fn verify(&self) { self.verify_inner(false); } } #[derive(Debug)] pub(crate) struct CursorRead where K: Hash + Eq + Clone + Debug, V: Clone, { txid: u64, length: usize, root: Ptr, last_seen: Mutex>, build_hasher: RandomState, k: PhantomData, v: PhantomData, } impl CursorRead { pub(crate) fn new(sblock: &SuperBlock) -> Self { let build_hasher = sblock.build_hasher.clone(); CursorRead { txid: sblock.txid, length: sblock.length, root: sblock.root, last_seen: 
Mutex::new(Vec::with_capacity(0)), build_hasher, k: PhantomData, v: PhantomData, } } } impl Drop for CursorRead { fn drop(&mut self) { let last_seen_guard = self .last_seen .try_lock() .expect("Unable to lock, something is horridly wrong!"); last_seen_guard.iter().for_each(|p| p.free::()); std::mem::drop(last_seen_guard); } } impl CursorReadOps for CursorRead { fn get_root_ptr(&self) -> Ptr { self.root } fn len(&self) -> usize { self.length } fn get_txid(&self) -> u64 { self.txid } fn hash_key(&self, k: &Q) -> u64 where K: Borrow, Q: Hash + Eq + ?Sized, { hash_key!(self, k) } fn verify(&self) { self.verify_inner(true); } } #[cfg(test)] mod tests { use super::*; #[test] fn test_hashtrie_cursor_basic() { let sb: SuperBlock = unsafe { SuperBlock::new() }; let mut wr = sb.create_writer(); assert!(wr.len() == 0); assert!(wr.search(0, &0).is_none()); assert!(wr.insert(0, 0, 0).is_none()); assert!(wr.len() == 1); assert!(wr.search(0, &0).is_some()); assert!(wr.search(1, &1).is_none()); assert!(wr.insert(1, 1, 0).is_none()); assert!(wr.search(1, &1).is_some()); assert!(wr.len() == 2); std::mem::drop(wr); std::mem::drop(sb); assert_released(); } #[test] fn test_hashtrie_cursor_insert_max_depth() { let mut sb: SuperBlock = unsafe { SuperBlock::new() }; let rdr = sb.create_reader(); let mut wr = sb.create_writer(); assert!(wr.len() == 0); for i in 0..(ABS_MAX_HEIGHT * 2) { // This pretty much stresses every (dirty) insert case. 
assert!(wr.insert(0, i, i).is_none()); wr.verify(); } assert!(wr.len() == (ABS_MAX_HEIGHT * 2) as usize); for i in 0..(ABS_MAX_HEIGHT * 2) { assert!(wr.search(0, &i).is_some()); } for i in 0..(ABS_MAX_HEIGHT * 2) { assert!(wr.remove(0, &i).is_some()); wr.verify(); } assert!(wr.len() == 0); let rdr2 = sb.pre_commit(wr, &rdr); rdr2.verify(); rdr.verify(); std::mem::drop(rdr); rdr2.verify(); std::mem::drop(rdr2); std::mem::drop(sb); assert_released(); } #[test] fn test_hashtrie_cursor_insert_broad() { let mut sb: SuperBlock = unsafe { SuperBlock::new() }; let rdr = sb.create_reader(); let mut wr = sb.create_writer(); assert!(wr.len() == 0); for i in 0..(ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) { assert!(wr.insert(i, i, i).is_none()); wr.verify(); } assert!(wr.len() == (ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) as usize); for i in 0..(ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) { assert!(wr.search(i, &i).is_some()); } for i in 0..(ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) { assert!(wr.remove(i, &i).is_some()); wr.verify(); } assert!(wr.len() == 0); let rdr2 = sb.pre_commit(wr, &rdr); rdr2.verify(); rdr.verify(); std::mem::drop(rdr); rdr2.verify(); std::mem::drop(rdr2); std::mem::drop(sb); assert_released(); } #[test] fn test_hashtrie_cursor_insert_multiple_txns() { let mut sb: SuperBlock = unsafe { SuperBlock::new() }; let mut rdr = sb.create_reader(); // Do thing assert!(rdr.len() == 0); for i in 0..(ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) { let mut wr = sb.create_writer(); assert!(wr.insert(i, i, i).is_none()); wr.verify(); rdr = sb.pre_commit(wr, &rdr); } { let rdr2 = sb.create_reader(); assert!(rdr2.len() == (ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) as usize); for i in 0..(ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) { assert!(rdr2.search(i, &i).is_some()); } } for i in 0..(ABS_MAX_HEIGHT * ABS_MAX_HEIGHT) { let mut wr = sb.create_writer(); assert!(wr.remove(i, &i).is_some()); wr.verify(); rdr = sb.pre_commit(wr, &rdr); } assert!(rdr.len() == 0); rdr.verify(); std::mem::drop(rdr); std::mem::drop(sb); assert_released(); } } 
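The search, insert, and remove loops in cursor.rs above all derive each branch index the same way: at depth `d`, the next `SHIFT` bits of the key's 64-bit hash select one of `HT_CAPACITY` slots, so walking `MAX_HEIGHT` levels consumes the hash exactly once. A minimal standalone sketch of that bit-slicing follows; the constant values (`SHIFT = 4`, `HT_CAPACITY = 16`) are illustrative assumptions matching the default non-skinny 16-slot branch layout, not definitions copied from the crate:

```rust
// Sketch (not part of the crate) of how the hashtrie consumes a 64-bit hash:
// at depth `d` the next SHIFT bits select one of HT_CAPACITY branch slots.
const SHIFT: u64 = 4; // assumed: 16-slot (non-skinny) branches
const HASH_MASK: u64 = (1 << SHIFT) - 1; // 0xf
const HT_CAPACITY: usize = 1 << SHIFT; // 16
const MAX_HEIGHT: u64 = 64 / SHIFT; // 16 levels exhaust the hash

fn slot_at_depth(h: u64, d: u64) -> usize {
    // Mirrors the cursor's index expression:
    // ((h & (HASH_MASK << (d * SHIFT))) >> (d * SHIFT)) as usize
    let shift = d * SHIFT;
    ((h & (HASH_MASK << shift)) >> shift) as usize
}

fn main() {
    let h: u64 = 0xdead_beef_cafe_f00d;
    // Walking all MAX_HEIGHT levels uses every nibble of the hash exactly once.
    let idxs: Vec<usize> = (0..MAX_HEIGHT).map(|d| slot_at_depth(h, d)).collect();
    assert!(idxs.iter().all(|&i| i < HT_CAPACITY));
    // Depth 0 selects the least significant nibble.
    assert_eq!(idxs[0], 0xd);
    println!("{:x?}", idxs);
}
```

Because two keys only share a bucket while their hash prefixes collide, this slicing is also why `insert` can push a full bucket one level deeper: the next `SHIFT` bits eventually distinguish the colliding hashes, or the keys end up in the same terminal bucket at `MAX_HEIGHT - 1`.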
concread-0.4.6/src/internals/hashtrie/iter.rs

//! Iterators for the hashtrie

use super::cursor::{Ptr, HT_CAPACITY, MAX_HEIGHT};
use std::collections::VecDeque;
use std::fmt::Debug;
use std::hash::Hash;
use std::marker::PhantomData;

/// Iterator over references to Key Value pairs stored in the map.
pub struct Iter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    length: usize,
    stack: VecDeque<(usize, Ptr)>,
    k: PhantomData<&'a K>,
    v: PhantomData<&'a V>,
}

impl<K: Clone + Hash + Eq + Debug, V: Clone> Iter<'_, K, V> {
    pub(crate) fn new(root: Ptr, length: usize) -> Self {
        let mut stack = VecDeque::with_capacity(MAX_HEIGHT as usize);
        stack.push_back((0, root));
        Iter {
            length,
            stack,
            k: PhantomData,
            v: PhantomData,
        }
    }
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for Iter<'a, K, V> {
    type Item = (&'a K, &'a V);

    /// Yield the next key value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        if self.stack.is_empty() {
            return None;
        }
        'outer: loop {
            if let Some((mut idx, tgt_ptr)) = self.stack.pop_back() {
                if tgt_ptr.is_bucket() {
                    // Look next at idx.
                    let v = &(tgt_ptr.as_bucket::<K, V>()[idx].v) as *const V;
                    let k = &(tgt_ptr.as_bucket::<K, V>()[idx].k) as *const K;
                    // Inc idx
                    idx += 1;
                    // push back if there is more to examine
                    if idx < tgt_ptr.as_bucket::<K, V>().len() {
                        // Still more to go
                        self.stack.push_back((idx, tgt_ptr));
                    }
                    return Some(unsafe { (&*k as &K, &*v as &V) });
                } else {
                    debug_assert!(tgt_ptr.is_branch());
                    let brch = tgt_ptr.as_branch::<K, V>();
                    while idx < HT_CAPACITY {
                        let interest_ptr = brch.nodes[idx];
                        idx += 1;
                        if !interest_ptr.is_null() {
                            // Push our current loc to the stack, and our ptr,
                            // as well as the new one.
                            self.stack.push_back((idx, tgt_ptr));
                            self.stack.push_back((0, interest_ptr));
                            continue 'outer;
                        }
                    }
                }
            } else {
                // stack is depleted!
                return None;
            }
        }
    }

    /// Provide a hint as to the number of items this iterator will yield.
    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.length, Some(self.length))
    }
}

/// Iterator over references to Keys stored in the map.
pub struct KeyIter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    iter: Iter<'a, K, V>,
}

impl<K: Clone + Hash + Eq + Debug, V: Clone> KeyIter<'_, K, V> {
    pub(crate) fn new(root: Ptr, length: usize) -> Self {
        KeyIter {
            iter: Iter::new(root, length),
        }
    }
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for KeyIter<'a, K, V> {
    type Item = &'a K;

    /// Yield the next key reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|(k, _)| k)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

/// Iterator over references to Values stored in the map.
pub struct ValueIter<'a, K, V>
where
    K: Hash + Eq + Clone + Debug,
    V: Clone,
{
    iter: Iter<'a, K, V>,
}

impl<K: Clone + Hash + Eq + Debug, V: Clone> ValueIter<'_, K, V> {
    pub(crate) fn new(root: Ptr, length: usize) -> Self {
        ValueIter {
            iter: Iter::new(root, length),
        }
    }
}

impl<'a, K: Clone + Hash + Eq + Debug, V: Clone> Iterator for ValueIter<'a, K, V> {
    type Item = &'a V;

    /// Yield the next value reference, or `None` if exhausted.
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|(_, v)| v)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

concread-0.4.6/src/internals/hashtrie/mod.rs

//! HashTrie - A concurrently readable HashTrie
//!
//! This is similar to `HashMap`, however it is based on a suffix trie which
//! is append only / additive only. As a result, the more you add, the more
//! space this will take up. The only way to "remove" an item would be to swap
//! its value with a "None". The trie won't shrink, but node size requirements
//! are low. For a trie with 4,294,967,295 items, only ~40MB is required. For
//! 1,000,000 items only ~12KB is required.
//!
//! If in doubt, you should use `HashMap` instead 🥰

// mod states;
// mod node;

pub mod cursor;
pub mod iter;

concread-0.4.6/src/internals/lincowcell/mod.rs

//! A CowCell with linear drop behaviour
//!
//! YOU SHOULD NOT USE THIS TYPE! Normally concurrent cells do NOT require the linear dropping
//! behaviour that this implements, and it will only make your application
//! worse for it. Consider `CowCell` and `EbrCell` instead.

/*
 * The reason this exists is for protecting the major concurrently readable structures
 * that can corrupt if intermediate transactions are removed early. Effectively what we
 * need to create is:
 *
 * [ A ] -> [ B ] -> [ C ] -> [ Write Head ]
 *   ^        ^        ^
 *  read     read     read
 *
 * This way if we drop the reader on B:
 *
 * [ A ] -> [ B ] -> [ C ] -> [ Write Head ]
 *   ^                 ^
 *  read              read
 *
 * Notice that A is not dropped. It's only when A is dropped:
 *
 * [ A ] -> [ B ] -> [ C ] -> [ Write Head ]
 *                     ^
 *                    read
 *
 * [ X ] -> [ B ] -> [ C ] -> [ Write Head ]
 *                     ^
 *                    read
 *
 * [ X ] -> [ X ] -> [ C ] -> [ Write Head ]
 *                     ^
 *                    read
 *
 * [ C ] -> [ Write Head ]
 *   ^
 *  read
 *
 * At this point we drop A and B. To achieve this we need to consider that:
 * - If WriteHead is dropped, C continues to live.
 * - If A/B are dropped, we don't affect C.
 * - Everything is dropped in order until a read txn exists.
 * - When we drop the main structure, no readers can exist.
 * - A writer must be able to commit to a stable location.
 *
 *     T        T        T
 * [ A ] -> [ B ] -> [ C ] -> [ Write Head ]
 *   ^        ^        ^
 *  RRR       RR       R
 *
 * As the write head proceeds, it must be able to interact with past versions to commit
 * garbage that is "last seen" in the former generation.
 */

use std::marker::PhantomData;
use std::ops::Deref;
use std::ops::DerefMut;
use std::sync::Arc;
use std::sync::{Mutex, MutexGuard};

/// Do not implement this. You don't need this negativity in your life.
pub trait LinCowCellCapable<R, U> {
    /// Create the first reader snapshot for a new instance.
    fn create_reader(&self) -> R;
    /// Create a writer that may be rolled back.
    fn create_writer(&self) -> U;
    /// Given the current active reader, and the writer to commit, update our
    /// main structure as mut self, and our previously linear generations based on
    /// what was updated.
    fn pre_commit(&mut self, new: U, prev: &R) -> R;
}

#[derive(Debug)]
/// A concurrently readable cell with linearised drop behaviour.
pub struct LinCowCell<T, R, U> {
    updater: PhantomData<U>,
    write: Mutex<T>,
    active: Mutex<Arc<LinCowCellInner<R>>>,
}

#[derive(Debug)]
/// A write txn over a linear cell.
pub struct LinCowCellWriteTxn<'a, T, R, U> {
    // This way we know who to contact for updating our data ....
    caller: &'a LinCowCell<T, R, U>,
    guard: MutexGuard<'a, T>,
    work: U,
}

#[derive(Debug)]
struct LinCowCellInner<R> {
    // This gives the chain effect.
    pin: Mutex<Option<Arc<LinCowCellInner<R>>>>,
    data: R,
}

#[derive(Debug)]
/// A read txn over a linear cell.
pub struct LinCowCellReadTxn<'a, T, R, U> {
    // We must outlive the root
    _caller: &'a LinCowCell<T, R, U>,
    // We pin the current version.
    work: Arc<LinCowCellInner<R>>,
}

impl<R> LinCowCellInner<R> {
    pub fn new(data: R) -> Self {
        LinCowCellInner {
            pin: Mutex::new(None),
            data,
        }
    }
}

impl<T, R, U> LinCowCell<T, R, U>
where
    T: LinCowCellCapable<R, U>,
{
    /// Create a new linear 🐄 cell.
    pub fn new(data: T) -> Self {
        let r = data.create_reader();
        LinCowCell {
            updater: PhantomData,
            write: Mutex::new(data),
            active: Mutex::new(Arc::new(LinCowCellInner::new(r))),
        }
    }

    /// Begin a read txn
    pub fn read(&self) -> LinCowCellReadTxn<'_, T, R, U> {
        let rwguard = self.active.lock().unwrap();
        LinCowCellReadTxn {
            _caller: self,
            // inc the arc.
            work: rwguard.clone(),
        }
    }

    /// Begin a write txn
    pub fn write(&self) -> LinCowCellWriteTxn<'_, T, R, U> {
        /* Take the exclusive write lock first */
        let write_guard = self.write.lock().unwrap();
        /* Now take a ro-txn to get the data copied */
        // let active_guard = self.active.lock();
        /* This copies the data */
        let work: U = (*write_guard).create_writer();
        /* Now build the write struct */
        LinCowCellWriteTxn {
            caller: self,
            guard: write_guard,
            work,
        }
    }

    /// Attempt a write txn
    pub fn try_write(&self) -> Option<LinCowCellWriteTxn<'_, T, R, U>> {
        self.write.try_lock().ok().map(|write_guard| {
            /* This copies the data */
            let work: U = (*write_guard).create_writer();
            /* Now build the write struct */
            LinCowCellWriteTxn {
                caller: self,
                guard: write_guard,
                work,
            }
        })
    }

    fn commit(&self, write: LinCowCellWriteTxn<T, R, U>) {
        // Destructure our writer.
        let LinCowCellWriteTxn {
            // This is self.
            caller: _caller,
            mut guard,
            work,
        } = write;
        // Get the previous generation.
        let mut rwguard = self.active.lock().unwrap();
        // Start to setup for the commit.
        let newdata = guard.pre_commit(work, &rwguard.data);
        let new_inner = Arc::new(LinCowCellInner::new(newdata));
        {
            // This modifies the next pointer of the existing read txns
            let mut rwguard_inner = rwguard.pin.lock().unwrap();
            // Create the arc pointer to our new data
            // add it to the last value
            *rwguard_inner = Some(new_inner.clone());
        }
        // now over-write the last value in the mutex.
        *rwguard = new_inner;
    }
}

impl<T, R, U> Deref for LinCowCellReadTxn<'_, T, R, U> {
    type Target = R;

    #[inline]
    fn deref(&self) -> &R {
        &self.work.data
    }
}

impl<T, R, U> AsRef<R> for LinCowCellReadTxn<'_, T, R, U> {
    #[inline]
    fn as_ref(&self) -> &R {
        &self.work.data
    }
}

impl<T, R, U> LinCowCellWriteTxn<'_, T, R, U>
where
    T: LinCowCellCapable<R, U>,
{
    #[inline]
    /// Get the mutable inner of this type
    pub fn get_mut(&mut self) -> &mut U {
        &mut self.work
    }

    /// Commit the active changes.
    pub fn commit(self) {
        /* Write our data back to the LinCowCell */
        self.caller.commit(self);
    }
}

impl<T, R, U> Deref for LinCowCellWriteTxn<'_, T, R, U> {
    type Target = U;

    #[inline]
    fn deref(&self) -> &U {
        &self.work
    }
}

impl<T, R, U> DerefMut for LinCowCellWriteTxn<'_, T, R, U> {
    #[inline]
    fn deref_mut(&mut self) -> &mut U {
        &mut self.work
    }
}

impl<T, R, U> AsRef<U> for LinCowCellWriteTxn<'_, T, R, U> {
    #[inline]
    fn as_ref(&self) -> &U {
        &self.work
    }
}

impl<T, R, U> AsMut<U> for LinCowCellWriteTxn<'_, T, R, U> {
    #[inline]
    fn as_mut(&mut self) -> &mut U {
        &mut self.work
    }
}

#[cfg(test)]
mod tests {
    use super::LinCowCell;
    use super::LinCowCellCapable;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::thread::scope;
    use std::time;

    #[derive(Debug)]
    struct TestData {
        x: i64,
    }

    #[derive(Debug)]
    struct TestDataReadTxn {
        x: i64,
    }

    #[derive(Debug)]
    struct TestDataWriteTxn {
        x: i64,
    }

    impl LinCowCellCapable<TestDataReadTxn, TestDataWriteTxn> for TestData {
        fn create_reader(&self) -> TestDataReadTxn {
            TestDataReadTxn { x: self.x }
        }

        fn create_writer(&self) -> TestDataWriteTxn {
            TestDataWriteTxn { x: self.x }
        }

        fn pre_commit(
            &mut self,
            new: TestDataWriteTxn,
            _prev: &TestDataReadTxn,
        ) -> TestDataReadTxn {
            // Update self if needed.
            self.x = new.x;
            // return a new reader.
            TestDataReadTxn { x: new.x }
        }
    }

    #[test]
    fn test_simple_create() {
        let data = TestData { x: 0 };
        let cc = LinCowCell::new(data);

        let cc_rotxn_a = cc.read();
        println!("cc_rotxn_a -> {:?}", cc_rotxn_a);
        assert_eq!(cc_rotxn_a.work.data.x, 0);

        {
            /* Take a write txn */
            let mut cc_wrtxn = cc.write();
            println!("cc_wrtxn -> {:?}", cc_wrtxn);
            assert_eq!(cc_wrtxn.work.x, 0);
            assert_eq!(cc_wrtxn.as_ref().x, 0);
            {
                let mut_ptr = cc_wrtxn.get_mut();
                /* Assert it's 0 */
                assert_eq!(mut_ptr.x, 0);
                mut_ptr.x = 1;
                assert_eq!(mut_ptr.x, 1);
            }
            // Check we haven't mutated the old data.
            assert_eq!(cc_rotxn_a.work.data.x, 0);
        }
        // The writer is dropped here. Assert no changes.
        assert_eq!(cc_rotxn_a.work.data.x, 0);

        {
            /* Take a new write txn */
            let mut cc_wrtxn = cc.write();
            println!("cc_wrtxn -> {:?}", cc_wrtxn);
            assert_eq!(cc_wrtxn.work.x, 0);
            assert_eq!(cc_wrtxn.as_ref().x, 0);
            {
                let mut_ptr = cc_wrtxn.get_mut();
                /* Assert it's 0 */
                assert_eq!(mut_ptr.x, 0);
                mut_ptr.x = 2;
                assert_eq!(mut_ptr.x, 2);
            }
            // Check we haven't mutated the old data.
            assert_eq!(cc_rotxn_a.work.data.x, 0);
            // Now commit
            cc_wrtxn.commit();
        }
        // Should not be perceived by the old txn.
        assert_eq!(cc_rotxn_a.work.data.x, 0);

        let cc_rotxn_c = cc.read();
        // Is visible to the new one though.
        assert_eq!(cc_rotxn_c.work.data.x, 2);
    }

    // == mt tests ==

    fn mt_writer(cc: &LinCowCell<TestData, TestDataReadTxn, TestDataWriteTxn>) {
        let mut last_value: i64 = 0;
        while last_value < 500 {
            let mut cc_wrtxn = cc.write();
            {
                let mut_ptr = cc_wrtxn.get_mut();
                assert!(mut_ptr.x >= last_value);
                last_value = mut_ptr.x;
                mut_ptr.x += 1;
            }
            cc_wrtxn.commit();
        }
    }

    fn rt_writer(cc: &LinCowCell<TestData, TestDataReadTxn, TestDataWriteTxn>) {
        let mut last_value: i64 = 0;
        while last_value < 500 {
            let cc_rotxn = cc.read();
            {
                assert!(cc_rotxn.work.data.x >= last_value);
                last_value = cc_rotxn.work.data.x;
            }
        }
    }

    #[test]
    #[cfg_attr(miri, ignore)]
    fn test_multithread_create() {
        let start = time::Instant::now();
        // Create the new cowcell.
let data = TestData { x: 0 }; let cc = LinCowCell::new(data); assert!(scope(|scope| { let cc_ref = &cc; let readers: Vec<_> = (0..7) .map(|_| { scope.spawn(move || { rt_writer(cc_ref); }) }) .collect(); let writers: Vec<_> = (0..3) .map(|_| { scope.spawn(move || { mt_writer(cc_ref); }) }) .collect(); for h in readers.into_iter() { h.join().unwrap(); } for h in writers.into_iter() { h.join().unwrap(); } true })); let end = time::Instant::now(); print!("Arc MT create :{:?} ", end - start); } static GC_COUNT: AtomicUsize = AtomicUsize::new(0); #[derive(Debug, Clone)] struct TestGcWrapper { data: T, } #[derive(Debug)] struct TestGcWrapperReadTxn { _data: T, } #[derive(Debug)] struct TestGcWrapperWriteTxn { data: T, } impl LinCowCellCapable, TestGcWrapperWriteTxn> for TestGcWrapper { fn create_reader(&self) -> TestGcWrapperReadTxn { TestGcWrapperReadTxn { _data: self.data.clone(), } } fn create_writer(&self) -> TestGcWrapperWriteTxn { TestGcWrapperWriteTxn { data: self.data.clone(), } } fn pre_commit( &mut self, new: TestGcWrapperWriteTxn, _prev: &TestGcWrapperReadTxn, ) -> TestGcWrapperReadTxn { // Update self if needed. self.data = new.data.clone(); // return a new reader. TestGcWrapperReadTxn { _data: self.data.clone(), } } } impl Drop for TestGcWrapperReadTxn { fn drop(&mut self) { // Add to the atomic counter ... 
GC_COUNT.fetch_add(1, Ordering::Release); } } fn test_gc_operation_thread( cc: &LinCowCell, TestGcWrapperReadTxn, TestGcWrapperWriteTxn>, ) { while GC_COUNT.load(Ordering::Acquire) < 50 { // thread::sleep(std::time::Duration::from_millis(200)); { let mut cc_wrtxn = cc.write(); { let mut_ptr = cc_wrtxn.get_mut(); mut_ptr.data += 1; } cc_wrtxn.commit(); } } } #[test] #[cfg_attr(miri, ignore)] fn test_gc_operation() { GC_COUNT.store(0, Ordering::Release); let data = TestGcWrapper { data: 0 }; let cc = LinCowCell::new(data); assert!(scope(|scope| { let cc_ref = &cc; let writers: Vec<_> = (0..3) .map(|_| { scope.spawn(move || { test_gc_operation_thread(cc_ref); }) }) .collect(); for h in writers.into_iter() { h.join().unwrap(); } true })); assert!(GC_COUNT.load(Ordering::Acquire) >= 50); } } #[cfg(test)] mod tests_linear { use super::LinCowCell; use super::LinCowCellCapable; use std::sync::atomic::{AtomicUsize, Ordering}; static GC_COUNT: AtomicUsize = AtomicUsize::new(0); #[derive(Debug, Clone)] struct TestGcWrapper { data: T, } #[derive(Debug)] struct TestGcWrapperReadTxn { _data: T, } #[derive(Debug)] struct TestGcWrapperWriteTxn { data: T, } impl LinCowCellCapable, TestGcWrapperWriteTxn> for TestGcWrapper { fn create_reader(&self) -> TestGcWrapperReadTxn { TestGcWrapperReadTxn { _data: self.data.clone(), } } fn create_writer(&self) -> TestGcWrapperWriteTxn { TestGcWrapperWriteTxn { data: self.data.clone(), } } fn pre_commit( &mut self, new: TestGcWrapperWriteTxn, _prev: &TestGcWrapperReadTxn, ) -> TestGcWrapperReadTxn { // Update self if needed. self.data = new.data.clone(); // return a new reader. TestGcWrapperReadTxn { _data: self.data.clone(), } } } impl Drop for TestGcWrapperReadTxn { fn drop(&mut self) { // Add to the atomic counter ... GC_COUNT.fetch_add(1, Ordering::Release); } } /* * This tests an important property of the lincowcell over the cow cell * that read txns are dropped *in order*. 
 */
    #[test]
    fn test_gc_operation_linear() {
        GC_COUNT.store(0, Ordering::Release);
        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        let data = TestGcWrapper { data: 0 };
        let cc = LinCowCell::new(data);

        // Open a read A.
        let cc_rotxn_a = cc.read();
        let cc_rotxn_a_2 = cc.read();

        // open a write, change and commit
        {
            let mut cc_wrtxn = cc.write();
            {
                let mut_ptr = cc_wrtxn.get_mut();
                mut_ptr.data += 1;
            }
            cc_wrtxn.commit();
        }

        // open a read B.
        let cc_rotxn_b = cc.read();

        // open a write, change and commit
        {
            let mut cc_wrtxn = cc.write();
            {
                let mut_ptr = cc_wrtxn.get_mut();
                mut_ptr.data += 1;
            }
            cc_wrtxn.commit();
        }

        // open a read C
        let cc_rotxn_c = cc.read();

        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // drop B
        drop(cc_rotxn_b);

        // gc count should be 0.
        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // drop C
        drop(cc_rotxn_c);

        // gc count should be 0
        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // Drop the second A, should not trigger yet.
        drop(cc_rotxn_a_2);
        assert!(GC_COUNT.load(Ordering::Acquire) == 0);

        // drop A
        drop(cc_rotxn_a);

        // gc count should be 2 (A + B, C is still live)
        assert!(GC_COUNT.load(Ordering::Acquire) == 2);
    }
}

concread-0.4.6/src/internals/lincowcell_async/mod.rs

//! A CowCell with linear drop behaviour, and async locking.
//!
//! YOU SHOULD NOT USE THIS TYPE! Normally concurrent cells do NOT require the linear dropping
//! behaviour that this implements, and it will only make your application
//! worse for it. Consider `CowCell` and `EbrCell` instead.

/*
 * The reason this exists is for protecting the major concurrently readable structures
 * that can corrupt if intermediate transactions are removed early. Effectively what we
 * need to create is:
 *
 * [ A ] -> [ B ] -> [ C ] -> [ Write Head ]
 *   ^        ^        ^
 *  read     read     read
 *
 * This way if we drop the reader on B:
 *
 * [ A ] -> [ B ] -> [ C ] -> [ Write Head ]
 *   ^                 ^
 *  read              read
 *
 * Notice that A is not dropped.
It's only when A is dropped: * * [ A ] -> [ B ] -> [ C ] -> [ Write Head ] * ^ * read * * [ X ] -> [ B ] -> [ C ] -> [ Write Head ] * ^ * read * [ X ] -> [ X ] -> [ C ] -> [ Write Head ] * ^ * read * * [ C ] -> [ Write Head ] * ^ * read * * At this point we drop A and B. To achieve this we need to consider that: * - If WriteHead is dropped, C continues to live. * - If A/B are dropped, we don't affect C. * - Everything is dropped in order until a read txn exists. * - When we drop the main structure, no readers can exist. * - A writer must be able to commit to a stable location. * * * T T T * [ A ] -> [ B ] -> [ C ] -> [ Write Head ] * ^ ^ ^ * RRR RR R * * * As the write head proceeds, it must be able to interact with past versions to commit * garbage that is "last seen" in the formers generation. * */ use std::marker::PhantomData; use std::ops::Deref; use std::ops::DerefMut; use std::sync::Arc; use std::sync::Mutex as SyncMutex; use tokio::sync::{Mutex, MutexGuard}; use crate::internals::lincowcell::LinCowCellCapable; #[derive(Debug)] /// A concurrently readable cell with linearised drop behaviour. pub struct LinCowCell { updater: PhantomData, write: Mutex, active: Mutex>>, } #[derive(Debug)] /// A write txn over a linear cell. pub struct LinCowCellWriteTxn<'a, T, R, U> { // This way we know who to contact for updating our data .... caller: &'a LinCowCell, guard: MutexGuard<'a, T>, work: U, } #[derive(Debug)] struct LinCowCellInner { // This gives the chain effect. pin: SyncMutex>>>, data: R, } #[derive(Debug)] /// A read txn over a linear cell. pub struct LinCowCellReadTxn<'a, T, R, U> { // We must outlive the root _caller: &'a LinCowCell, // We pin the current version. work: Arc>, } impl LinCowCellInner { pub fn new(data: R) -> Self { LinCowCellInner { pin: SyncMutex::new(None), data, } } } impl LinCowCell where T: LinCowCellCapable, { /// Create a new linear 🐄 cell. 
pub fn new(data: T) -> Self { let r = data.create_reader(); LinCowCell { updater: PhantomData, write: Mutex::new(data), active: Mutex::new(Arc::new(LinCowCellInner::new(r))), } } /// Begin a read txn pub async fn read(&self) -> LinCowCellReadTxn { let rwguard = self.active.lock().await; LinCowCellReadTxn { _caller: self, // inc the arc. work: rwguard.clone(), } } /// Begin a write txn pub async fn write<'x>(&'x self) -> LinCowCellWriteTxn<'x, T, R, U> { /* Take the exclusive write lock first */ let write_guard = self.write.lock().await; /* Now take a ro-txn to get the data copied */ // let active_guard = self.active.lock(); /* This copies the data */ let work: U = (*write_guard).create_writer(); /* Now build the write struct */ LinCowCellWriteTxn { caller: self, guard: write_guard, work, } } /// Attempt a write txn pub fn try_write(&self) -> Option> { self.write .try_lock() .map(|write_guard| { /* This copies the data */ let work: U = (*write_guard).create_writer(); /* Now build the write struct */ LinCowCellWriteTxn { caller: self, guard: write_guard, work, } }) .ok() } async fn commit(&self, write: LinCowCellWriteTxn<'_, T, R, U>) { // Destructure our writer. let LinCowCellWriteTxn { // This is self. caller: _caller, mut guard, work, } = write; // Get the previous generation. let mut rwguard = self.active.lock().await; // Start to setup for the commit. let newdata = guard.pre_commit(work, &rwguard.data); let new_inner = Arc::new(LinCowCellInner::new(newdata)); { // This modifies the next pointer of the existing read txns let mut rwguard_inner = rwguard.pin.lock().unwrap(); // Create the arc pointer to our new data // add it to the last value *rwguard_inner = Some(new_inner.clone()); } // now over-write the last value in the mutex. 
*rwguard = new_inner; } } impl Deref for LinCowCellReadTxn<'_, T, R, U> { type Target = R; #[inline] fn deref(&self) -> &R { &self.work.data } } impl AsRef for LinCowCellReadTxn<'_, T, R, U> { #[inline] fn as_ref(&self) -> &R { &self.work.data } } impl LinCowCellWriteTxn<'_, T, R, U> where T: LinCowCellCapable, { #[inline] /// Get the mutable inner of this type pub fn get_mut(&mut self) -> &mut U { &mut self.work } /// Commit the active changes. pub async fn commit(self) { /* Write our data back to the LinCowCell */ self.caller.commit(self).await; } } impl Deref for LinCowCellWriteTxn<'_, T, R, U> { type Target = U; #[inline] fn deref(&self) -> &U { &self.work } } impl DerefMut for LinCowCellWriteTxn<'_, T, R, U> { #[inline] fn deref_mut(&mut self) -> &mut U { &mut self.work } } impl AsRef for LinCowCellWriteTxn<'_, T, R, U> { #[inline] fn as_ref(&self) -> &U { &self.work } } impl AsMut for LinCowCellWriteTxn<'_, T, R, U> { #[inline] fn as_mut(&mut self) -> &mut U { &mut self.work } } #[cfg(test)] mod tests { use super::LinCowCell; use super::LinCowCellCapable; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; #[derive(Debug)] struct TestData { x: i64, } #[derive(Debug)] struct TestDataReadTxn { x: i64, } #[derive(Debug)] struct TestDataWriteTxn { x: i64, } impl LinCowCellCapable for TestData { fn create_reader(&self) -> TestDataReadTxn { TestDataReadTxn { x: self.x } } fn create_writer(&self) -> TestDataWriteTxn { TestDataWriteTxn { x: self.x } } fn pre_commit( &mut self, new: TestDataWriteTxn, _prev: &TestDataReadTxn, ) -> TestDataReadTxn { // Update self if needed. self.x = new.x; // return a new reader. 
TestDataReadTxn { x: new.x } } } #[tokio::test] async fn test_simple_create() { let data = TestData { x: 0 }; let cc = LinCowCell::new(data); let cc_rotxn_a = cc.read().await; println!("cc_rotxn_a -> {:?}", cc_rotxn_a); assert_eq!(cc_rotxn_a.work.data.x, 0); { /* Take a write txn */ let mut cc_wrtxn = cc.write().await; println!("cc_wrtxn -> {:?}", cc_wrtxn); assert_eq!(cc_wrtxn.work.x, 0); assert_eq!(cc_wrtxn.as_ref().x, 0); { let mut_ptr = cc_wrtxn.get_mut(); /* Assert it's 0 */ assert_eq!(mut_ptr.x, 0); mut_ptr.x = 1; assert_eq!(mut_ptr.x, 1); } // Check we haven't mutated the old data. assert_eq!(cc_rotxn_a.work.data.x, 0); } // The writer is dropped here. Assert no changes. assert_eq!(cc_rotxn_a.work.data.x, 0); { /* Take a new write txn */ let mut cc_wrtxn = cc.write().await; println!("cc_wrtxn -> {:?}", cc_wrtxn); assert_eq!(cc_wrtxn.work.x, 0); assert_eq!(cc_wrtxn.as_ref().x, 0); { let mut_ptr = cc_wrtxn.get_mut(); /* Assert it's 0 */ assert_eq!(mut_ptr.x, 0); mut_ptr.x = 2; assert_eq!(mut_ptr.x, 2); } // Check we haven't mutated the old data. assert_eq!(cc_rotxn_a.work.data.x, 0); // Now commit cc_wrtxn.commit().await; } // Should not be percieved by the old txn. assert_eq!(cc_rotxn_a.work.data.x, 0); let cc_rotxn_c = cc.read().await; // Is visible to the new one though. assert_eq!(cc_rotxn_c.work.data.x, 2); } // == mt tests == async fn mt_writer(cc: Arc>) { let mut last_value: i64 = 0; while last_value < 500 { let mut cc_wrtxn = cc.write().await; { let mut_ptr = cc_wrtxn.get_mut(); assert!(mut_ptr.x >= last_value); last_value = mut_ptr.x; mut_ptr.x += 1; } cc_wrtxn.commit().await; } } async fn rt_writer(cc: Arc>) { let mut last_value: i64 = 0; while last_value < 500 { let cc_rotxn = cc.read().await; { assert!(cc_rotxn.work.data.x >= last_value); last_value = cc_rotxn.work.data.x; } } } #[tokio::test] #[cfg_attr(miri, ignore)] async fn test_concurrent_create() { let start = time::Instant::now(); // Create the new cowcell. 
let data = TestData { x: 0 }; let cc = Arc::new(LinCowCell::new(data)); let _ = tokio::join!( tokio::task::spawn(rt_writer(cc.clone())), tokio::task::spawn(rt_writer(cc.clone())), tokio::task::spawn(rt_writer(cc.clone())), tokio::task::spawn(rt_writer(cc.clone())), tokio::task::spawn(mt_writer(cc.clone())), tokio::task::spawn(mt_writer(cc.clone())), ); let end = time::Instant::now(); print!("Arc MT create :{:?} ", end - start); } static GC_COUNT: AtomicUsize = AtomicUsize::new(0); #[derive(Debug, Clone)] struct TestGcWrapper { data: T, } #[derive(Debug)] struct TestGcWrapperReadTxn { _data: T, } #[derive(Debug)] struct TestGcWrapperWriteTxn { data: T, } impl LinCowCellCapable, TestGcWrapperWriteTxn> for TestGcWrapper { fn create_reader(&self) -> TestGcWrapperReadTxn { TestGcWrapperReadTxn { _data: self.data.clone(), } } fn create_writer(&self) -> TestGcWrapperWriteTxn { TestGcWrapperWriteTxn { data: self.data.clone(), } } fn pre_commit( &mut self, new: TestGcWrapperWriteTxn, _prev: &TestGcWrapperReadTxn, ) -> TestGcWrapperReadTxn { // Update self if needed. self.data = new.data.clone(); // return a new reader. TestGcWrapperReadTxn { _data: self.data.clone(), } } } impl Drop for TestGcWrapperReadTxn { fn drop(&mut self) { // Add to the atomic counter ... 
GC_COUNT.fetch_add(1, Ordering::Release); } } async fn test_gc_operation_thread( cc: Arc< LinCowCell, TestGcWrapperReadTxn, TestGcWrapperWriteTxn>, >, ) { while GC_COUNT.load(Ordering::Acquire) < 50 { // thread::sleep(std::time::Duration::from_millis(200)); { let mut cc_wrtxn = cc.write().await; { let mut_ptr = cc_wrtxn.get_mut(); mut_ptr.data += 1; } cc_wrtxn.commit().await; } } } #[tokio::test] #[cfg_attr(miri, ignore)] async fn test_gc_operation() { GC_COUNT.store(0, Ordering::Release); let data = TestGcWrapper { data: 0 }; let cc = Arc::new(LinCowCell::new(data)); let _ = tokio::join!( tokio::task::spawn(test_gc_operation_thread(cc.clone())), tokio::task::spawn(test_gc_operation_thread(cc.clone())), tokio::task::spawn(test_gc_operation_thread(cc.clone())), tokio::task::spawn(test_gc_operation_thread(cc.clone())), ); assert!(GC_COUNT.load(Ordering::Acquire) >= 50); } } #[cfg(test)] mod tests_linear { use super::LinCowCell; use super::LinCowCellCapable; use std::sync::atomic::{AtomicUsize, Ordering}; static GC_COUNT: AtomicUsize = AtomicUsize::new(0); #[derive(Debug, Clone)] struct TestGcWrapper { data: T, } #[derive(Debug)] struct TestGcWrapperReadTxn { _data: T, } #[derive(Debug)] struct TestGcWrapperWriteTxn { data: T, } impl LinCowCellCapable, TestGcWrapperWriteTxn> for TestGcWrapper { fn create_reader(&self) -> TestGcWrapperReadTxn { TestGcWrapperReadTxn { _data: self.data.clone(), } } fn create_writer(&self) -> TestGcWrapperWriteTxn { TestGcWrapperWriteTxn { data: self.data.clone(), } } fn pre_commit( &mut self, new: TestGcWrapperWriteTxn, _prev: &TestGcWrapperReadTxn, ) -> TestGcWrapperReadTxn { // Update self if needed. self.data = new.data.clone(); // return a new reader. TestGcWrapperReadTxn { _data: self.data.clone(), } } } impl Drop for TestGcWrapperReadTxn { fn drop(&mut self) { // Add to the atomic counter ... 
GC_COUNT.fetch_add(1, Ordering::Release); } } /* * This tests an important property of the lincowcell over the cow cell * that read txns are dropped *in order*. */ #[tokio::test] async fn test_gc_operation_linear() { GC_COUNT.store(0, Ordering::Release); assert!(GC_COUNT.load(Ordering::Acquire) == 0); let data = TestGcWrapper { data: 0 }; let cc = LinCowCell::new(data); // Open a read A. let cc_rotxn_a = cc.read().await; let cc_rotxn_a_2 = cc.read().await; // open a write, change and commit { let mut cc_wrtxn = cc.write().await; { let mut_ptr = cc_wrtxn.get_mut(); mut_ptr.data += 1; } cc_wrtxn.commit().await; } // open a read B. let cc_rotxn_b = cc.read().await; // open a write, change and commit { let mut cc_wrtxn = cc.write().await; { let mut_ptr = cc_wrtxn.get_mut(); mut_ptr.data += 1; } cc_wrtxn.commit().await; } // open a read C let cc_rotxn_c = cc.read().await; assert!(GC_COUNT.load(Ordering::Acquire) == 0); // drop B drop(cc_rotxn_b); // gc count should be 0. assert!(GC_COUNT.load(Ordering::Acquire) == 0); // drop C drop(cc_rotxn_c); // gc count should be 0 assert!(GC_COUNT.load(Ordering::Acquire) == 0); // Drop the second A, should not trigger yet. drop(cc_rotxn_a_2); assert!(GC_COUNT.load(Ordering::Acquire) == 0); // drop A drop(cc_rotxn_a); // gc count should be 2 (A + B, C is still live) assert!(GC_COUNT.load(Ordering::Acquire) == 2); } } concread-0.4.6/src/internals/mod.rs000064400000000000000000000012361046102023000153260ustar 00000000000000//! This module contains all the internals of how the complex concurrent datastructures //! are implemented. You should turn back now. Nothing of value is here. This module can //! only inflict horror upon you. //! //! This module exists for one purpose - to allow external composition of structures //! coordinated by a single locking manager. This makes every element of this module //! unsafe in every meaning of the word. If you handle this module at all, you will //! probably cause space time to unravel. //! //! 
⚠️ ⚠️ ⚠️ pub mod bptree; pub mod hashmap; pub mod hashtrie; pub mod lincowcell; #[cfg(feature = "asynch")] pub mod lincowcell_async; concread-0.4.6/src/lc_tests.rs000064400000000000000000000034451046102023000143740ustar 00000000000000use crate::internals::bptree::cursor::{CursorRead, CursorWrite, SuperBlock}; use crate::internals::lincowcell::{LinCowCell, LinCowCellCapable}; struct TestStruct { bptree_map_a: SuperBlock, bptree_map_b: SuperBlock, } struct TestStructRead { bptree_map_a: CursorRead, bptree_map_b: CursorRead, } struct TestStructWrite { bptree_map_a: CursorWrite, bptree_map_b: CursorWrite, } impl LinCowCellCapable for TestStruct { fn create_reader(&self) -> TestStructRead { // This sets up the first reader. TestStructRead { bptree_map_a: self.bptree_map_a.create_reader(), bptree_map_b: self.bptree_map_b.create_reader(), } } fn create_writer(&self) -> TestStructWrite { // This sets up the first writer. TestStructWrite { bptree_map_a: self.bptree_map_a.create_writer(), bptree_map_b: self.bptree_map_b.create_writer(), } } fn pre_commit(&mut self, new: TestStructWrite, prev: &TestStructRead) -> TestStructRead { let TestStructWrite { bptree_map_a, bptree_map_b, } = new; let bptree_map_a = self .bptree_map_a .pre_commit(bptree_map_a, &prev.bptree_map_a); let bptree_map_b = self .bptree_map_b .pre_commit(bptree_map_b, &prev.bptree_map_b); TestStructRead { bptree_map_a, bptree_map_b, } } } #[test] fn test_lc_basic() { let lcc = LinCowCell::new(TestStruct { bptree_map_a: unsafe { SuperBlock::new() }, bptree_map_b: unsafe { SuperBlock::new() }, }); let x = lcc.write(); x.commit(); let _r = lcc.read(); } concread-0.4.6/src/lib.rs000064400000000000000000000050321046102023000133140ustar 00000000000000//! Concread - Concurrently Readable Datastructures //! //! 
Concurrently readable is often referred to as [Copy-On-Write](https://en.wikipedia.org/wiki/Copy-on-write), [Multi-Version-Concurrency-Control](https://en.wikipedia.org/wiki/Multiversion_concurrency_control)
//! or [Software Transactional Memory](https://en.wikipedia.org/wiki/Software_transactional_memory).
//!
//! These structures allow multiple readers with transactions
//! to proceed while single writers can operate. A reader is guaranteed the content
//! of their transaction will remain the same for the duration of the read, and readers do not block
//! writers from proceeding.
//! Writers are serialised, just like a mutex.
//!
//! You can use these in place of a RwLock, and will likely see improvements in
//! parallel throughput of your application.
//!
//! The best use is in place of a mutex/rwlock, where the reader exists for a
//! non-trivial amount of time.
//!
//! For example, if you have a RwLock where the lock is taken, data changed or read, and dropped
//! immediately, this probably won't help you.
//!
//! However, if you have a RwLock where you hold the read lock for any amount of time,
//! writers will begin to stall - or inversely, the writer will cause readers to block
//! and wait as the writer proceeds.
//!
//! # Features
//! This library provides multiple structures for you to use. You may enable or disable these
//! with the following features.
//!
//! * `ebr` - epoch based reclaim cell
//! * `maps` - concurrently readable b+tree and hashmaps
//! * `arcache` - concurrently readable ARC cache
//! * `ahash` - use the cpu accelerated ahash crate
//!
//! By default all of these features are enabled. If you are planning to use this crate in a wasm
//! context we recommend you use only `maps` as a feature.
// #![deny(warnings)]
#![warn(unused_extern_crates)]
#![warn(missing_docs)]
#![allow(clippy::needless_lifetimes)]
#![cfg_attr(feature = "simd_support", feature(portable_simd))]

#[cfg(feature = "maps")]
#[macro_use]
extern crate smallvec;

pub mod cowcell;
pub use cowcell::CowCell;
#[cfg(feature = "ebr")]
pub mod ebrcell;
#[cfg(feature = "ebr")]
pub use ebrcell::EbrCell;

#[cfg(feature = "arcache")]
pub mod arcache;

#[cfg(feature = "tcache")]
pub mod threadcache;

// This is where the scary rust lives.
#[cfg(feature = "maps")]
pub mod internals;
// This is where the gud rust lives.
#[cfg(feature = "maps")]
mod utils;

#[cfg(feature = "maps")]
pub mod bptree;
#[cfg(feature = "maps")]
pub mod hashmap;
#[cfg(feature = "maps")]
pub mod hashtrie;

#[cfg(test)]
mod lc_tests;

concread-0.4.6/src/threadcache/mod.rs

//! ThreadCache - A per-thread cache with transactional behaviour.
//!
//! This provides a per-thread cache, which uses a broadcast invalidation
//! queue to manage local content. This is similar to how a CPU cache works
//! in hardware. Generally this is best for small, distinct caches with very
//! few changes / writes.
//!
//! It's worth noting that each thread needs to frequently "read" its cache.
//! Any idle thread will end up with invalidations building up, which can consume
//! a large volume of memory. This means you need your "readers" to have transactions
//! opened/closed periodically to ensure that invalidations are acknowledged.
//!
//! Generally you should prefer to use `ARCache` over this module unless you really require
//! the properties of this module.
use std::collections::HashSet; use std::num::NonZeroUsize; use std::sync::atomic::{AtomicU64, Ordering}; use std::sync::mpsc::{channel, Receiver, Sender}; use std::sync::Arc; use std::sync::{Mutex, MutexGuard}; use std::fmt::Debug; use std::hash::Hash; use lru::LruCache; struct Inner where K: Hash + Eq + Debug + Clone, { tid: usize, last_inv: Option>, cache: LruCache, } /// An instance of a threads local cache store. pub struct ThreadLocal where K: Hash + Eq + Debug + Clone, { rx: Receiver>, wrlock: Arc>>, inv_up_to_txid: Arc, inner: Mutex>, } struct Writer where K: Hash + Eq + Debug + Clone, { txs: Vec>>, } /// A write transaction over this local threads cache. If you hold the write txn, no /// other thread can be in the write state. Changes to this cache will be broadcast to /// other threads to ensure they can revalidate their content correctly. pub struct ThreadLocalWriteTxn<'a, K, V> where K: Hash + Eq + Debug + Clone, { txid: u64, // parent: &'a mut ThreadLocal, parent: MutexGuard<'a, Inner>, guard: MutexGuard<'a, Writer>, rollback: HashSet, inv_up_to_txid: Arc, } /// A read transaction of this cache. During a read, it is guaranteed that the content /// of this cache will not be updated or invalidated unless by this threads actions. pub struct ThreadLocalReadTxn<'a, K, V> where K: Hash + Eq + Debug + Clone, { // txid: u64, // parent: &'a mut ThreadLocal, parent: MutexGuard<'a, Inner>, } #[derive(Clone)] struct Invalidate where K: Hash + Eq + Debug + Clone, { k: K, txid: u64, } impl ThreadLocal where K: Hash + Eq + Debug + Clone, { /// Create a new set of caches. You must specify the number of threads caches to /// create, and the per-thread size of the cache in capacity. An array of the /// cache instances will be returned that you can then distribute to the threads. 
pub fn new(threads: usize, capacity: usize) -> Vec { assert!(threads > 0); let capacity = NonZeroUsize::new(capacity).unwrap(); let (txs, rxs): (Vec<_>, Vec<_>) = (0..threads).map(|_| channel::>()).unzip(); // Create an Arc> for the writer. let inv_up_to_txid = Arc::new(AtomicU64::new(0)); let wrlock = Arc::new(Mutex::new(Writer { txs })); // Then for each thread, take one rx and a clone of the broadcast tbl. // Allocate a threadid (tid). rxs.into_iter() .enumerate() .map(|(tid, rx)| ThreadLocal { rx, wrlock: wrlock.clone(), inv_up_to_txid: inv_up_to_txid.clone(), inner: Mutex::new(Inner { tid, last_inv: None, cache: LruCache::new(capacity), }), }) .collect() } /// Begin a read transaction of this thread local cache. In the start of this read /// invalidation requests will be acknowledged. pub fn read(&mut self) -> ThreadLocalReadTxn { let txid = self.inv_up_to_txid.load(Ordering::Acquire); let parent = self.invalidate(txid); ThreadLocalReadTxn { parent } } /// Begin a write transaction of this thread local cache. Once granted, only this /// thread may be in the write state - all other threads will either block on /// acquiring the write, or they can proceed to read. pub fn write(&mut self) -> ThreadLocalWriteTxn { // SAFETY this is safe, because while we are duplicating the mutable reference // which conflicts with the mutex, we aren't change the wrlock value so the mutex // is fine. // let parent: &mut Self = unsafe { &mut *(self as *mut _) }; // We are the only writer! 
        let guard = self.wrlock.lock().unwrap();
        let inv_up_to_txid = self.inv_up_to_txid.clone();
        let txid = self.inv_up_to_txid.load(Ordering::Acquire);
        let txid = txid + 1;
        let parent = self.invalidate(txid);
        ThreadLocalWriteTxn {
            txid,
            parent,
            guard,
            rollback: HashSet::new(),
            inv_up_to_txid,
        }
    }

    fn invalidate(&self, up_to: u64) -> MutexGuard<'_, Inner<K, V>> {
        let mut inner = self.inner.lock().unwrap();
        if let Some(inv_txid) = inner.last_inv.as_ref().map(|inv| inv.txid) {
            if inv_txid > up_to {
                // We've already invalidated past this point.
                return inner;
            } else {
                let mut inv = None;
                std::mem::swap(&mut inv, &mut inner.last_inv);
                // Must be valid due to being in a SOME loop!
                let inv = inv.unwrap();
                inner.cache.pop(&inv.k);
            }
        }
        // We acted on the stashed invalidation, so let's see if anything else needs work.
        while let Ok(inv) = self.rx.try_recv() {
            if inv.txid > up_to {
                // Stash this for next loop.
                inner.last_inv = Some(inv);
                return inner;
            } else {
                inner.cache.pop(&inv.k);
            }
        }
        inner
    }
}

impl<K, V> ThreadLocalWriteTxn<'_, K, V>
where
    K: Hash + Eq + Debug + Clone,
{
    /// Attempt to retrieve a k-v pair from the cache. If it is not present, `None` is returned.
    pub fn get(&mut self, k: &K) -> Option<&V> {
        self.parent.cache.get(k)
    }

    /// Determine if the key exists in the cache.
    pub fn contains_key(&mut self, k: &K) -> bool {
        self.parent.cache.get(k).is_some()
    }

    /// Insert a new item to this cache for this transaction.
    pub fn insert(&mut self, k: K, v: V) -> Option<V> {
        // Store the k in our rollback set.
        self.rollback.insert(k.clone());
        self.parent.cache.put(k, v)
    }

    /// Remove an item from the cache for this transaction. IE you are deleting the k-v.
    pub fn remove(&mut self, k: &K) -> Option<V> {
        self.rollback.insert(k.clone());
        self.parent.cache.pop(k)
    }

    /// Commit the changes to this cache so they are visible to others. If you do NOT call
    /// commit, all changes to this cache are rolled back to prevent invalid states.
    pub fn commit(mut self) {
        // We are committing, so let's get ready.
        // First, anything that we touched in the rollback set will need
        // to be invalidated from other caches. It doesn't matter if we
        // removed or inserted, it has the same effect on them.
        self.guard.txs.iter().enumerate().for_each(|(i, tx)| {
            if i != self.parent.tid {
                self.rollback.iter().for_each(|k| {
                    // Ignore channel failures.
                    let _ = tx.send(Invalidate {
                        k: k.clone(),
                        txid: self.txid,
                    });
                });
            }
        });

        // Now we have issued our invalidations, we can tell people to invalidate up to this txid
        self.inv_up_to_txid.store(self.txid, Ordering::Release);

        // Ensure our rollback set is empty now to avoid the drop handler.
        self.rollback.clear();
        // We're done!
    }
}

impl<K, V> Drop for ThreadLocalWriteTxn<'_, K, V>
where
    K: Hash + Eq + Debug + Clone,
{
    fn drop(&mut self) {
        // Clear anything that's in the rollback.
        for k in self.rollback.iter() {
            self.parent.cache.pop(k);
        }
    }
}

impl<K, V> ThreadLocalReadTxn<'_, K, V>
where
    K: Hash + Eq + Debug + Clone,
{
    /// Attempt to retrieve a k-v pair from the cache. If it is not present, `None` is returned.
    pub fn get(&mut self, k: &K) -> Option<&V> {
        self.parent.cache.get(k)
    }

    /// Determine if the key exists in the cache.
    pub fn contains_key(&mut self, k: &K) -> bool {
        self.parent.cache.get(k).is_some()
    }

    /// Insert a new item to this cache for this transaction.
    pub fn insert(&mut self, k: K, v: V) -> Option<V> {
        self.parent.cache.put(k, v)
    }
}

#[cfg(test)]
mod tests {
    use super::ThreadLocal;

    // Temporarily ignored due to a bug in lru.
    #[test]
    // #[cfg_attr(miri, ignore)]
    fn test_basic() {
        let mut cache: Vec<ThreadLocal<usize, usize>> = ThreadLocal::new(2, 8);
        let mut cache_a = cache.pop().unwrap();
        let mut cache_b = cache.pop().unwrap();

        let mut wr_txn = cache_a.write();
        let mut rd_txn = cache_b.read();

        wr_txn.insert(1, 1);
        wr_txn.insert(2, 2);
        assert!(wr_txn.contains_key(&1));
        assert!(wr_txn.contains_key(&2));
        assert!(!rd_txn.contains_key(&1));
        assert!(!rd_txn.contains_key(&2));
        wr_txn.commit();
        drop(rd_txn);

        let mut rd_txn = cache_b.read();
        // Even in a new txn, we don't have this in our cache.
        assert!(!rd_txn.contains_key(&1));
        assert!(!rd_txn.contains_key(&2));
        // But we can insert it to match.
        rd_txn.insert(1, 1);
        rd_txn.insert(2, 2);
        drop(rd_txn);

        // Repeat use of rd should still show it.
        let mut rd_txn = cache_b.read();
        assert!(rd_txn.contains_key(&1));
        assert!(rd_txn.contains_key(&2));
        drop(rd_txn);

        // Add new items.
        let mut wr_txn = cache_a.write();
        assert!(wr_txn.contains_key(&1));
        assert!(wr_txn.contains_key(&2));
        wr_txn.insert(3, 3);
        assert!(wr_txn.contains_key(&3));
        drop(wr_txn);

        let mut wr_txn = cache_a.write();
        assert!(wr_txn.contains_key(&1));
        assert!(wr_txn.contains_key(&2));
        // Should have been rolled back.
        assert!(!wr_txn.contains_key(&3));

        // Now invalidate 1/2.
        wr_txn.remove(&1);
        wr_txn.remove(&2);
        wr_txn.commit();

        // This sends invalidation reqs, so we should now have removed these in the other cache.
        let mut rd_txn = cache_b.read();
        // Even in a new txn, we don't have this in our cache.
        assert!(!rd_txn.contains_key(&1));
        assert!(!rd_txn.contains_key(&2));
    }
}
concread-0.4.6/src/unsound.rs000064400000000000000000000022401046102023000142400ustar 00000000000000extern crate concread;

use concread::cowcell::CowCell;
use std::ops::Deref;
// use crossbeam_epoch::*;
use std::mem::forget;
use std::rc::Rc;

struct StrRef<'a> {
    r: &'a str,
}

impl Drop for StrRef<'_> {
    fn drop(&mut self) {
        println!("{}", self.r);
    }
}

impl Clone for StrRef<'_> {
    fn clone(&self) -> Self {
        StrRef { r: self.r }
    }
}

struct ChangesItselfString {
    pub s: Rc<String>,
}

impl Drop for ChangesItselfString {
    fn drop(&mut self) {
        Rc::get_mut(&mut self.s)
            .unwrap()
            .as_mut_str()
            .make_ascii_uppercase();
        // Keep the object alive.
        forget(self.s.clone());
    }
}

fn main() {
    {
        let s = ChangesItselfString {
            s: Rc::new(String::from("lowercase_string")),
        };
        let f = StrRef { r: s.s.deref() };
        let cell = CowCell::new(f);
        drop(cell);
    }
    println!("ChangesItselfString is gone!");
    // StrRef drop() references possibly freed memory! I made it view a leaked Rc so as to
    // not get a segfault, but in general this can be made to do all kinds of wrong things.
    // pin().flush();
    // pin().flush();
}
concread-0.4.6/src/unsound2.rs000064400000000000000000000025161046102023000143270ustar 00000000000000#![forbid(unsafe_code)]
extern crate concread;

#[cfg(not(feature = "unsoundness"))]
fn main() {
    eprintln!("Recompile with --features unsoundness");
}

#[cfg(feature = "unsoundness")]
fn main() {
    use concread::arcache::ARCache;
    use std::rc::Rc;
    use std::sync::Arc;

    let non_sync_item = Rc::new(0); // neither `Send` nor `Sync`
    assert_eq!(Rc::strong_count(&non_sync_item), 1);

    let cache = ARCache::<usize, Rc<usize>>::new_size(5, 5);
    let mut writer = cache.write();
    writer.insert(0, non_sync_item);
    writer.commit();

    let arc_parent = Arc::new(cache);

    let mut handles = vec![];
    for _ in 0..5 {
        let arc_child = arc_parent.clone();
        let child_handle = std::thread::spawn(move || {
            let reader = arc_child.read(); // new Reader of ARCache
            let smuggled_rc = reader.get(&0).unwrap();

            for _ in 0..1000 {
                let _dummy_clone = Rc::clone(&smuggled_rc); // Increment `strong_count` of `Rc`
                // When `_dummy_clone` is dropped, `strong_count` is decremented.
            }
        });
        handles.push(child_handle);
    }

    for handle in handles {
        handle.join().expect("failed to join child thread");
    }

    assert_eq!(Rc::strong_count(arc_parent.read().get(&0).unwrap()), 1);
}
concread-0.4.6/src/unsound3.rs000064400000000000000000000030621046102023000143250ustar 00000000000000#![forbid(unsafe_code)]
extern crate concread;

#[cfg(not(feature = "unsoundness"))]
fn main() {
    eprintln!("Recompile with --features unsoundness");
}

#[derive(Debug, Clone, Copy)]
#[cfg(feature = "unsoundness")]
enum RefOrInt<'a> {
    Ref(&'a u64),
    Int(u64),
}

#[cfg(feature = "unsoundness")]
fn main() {
    use concread::arcache::ARCache;
    use std::cell::Cell;
    use std::sync::Arc;

    static PARENT_STATIC: u64 = 1;

    // `Cell` is `Send` but not `Sync`.
    let item_not_sync = Cell::new(RefOrInt::Ref(&PARENT_STATIC));

    let cache = ARCache::<usize, Cell<RefOrInt<'static>>>::new_size(5, 5);
    let mut writer = cache.write();
    writer.insert(0, item_not_sync);
    writer.commit();

    let arc_parent = Arc::new(cache);
    let arc_child = arc_parent.clone();

    std::thread::spawn(move || {
        let arc_child = arc_child;
        // new `Reader` of `ARCache`
        let reader = arc_child.read();
        let ref_to_smuggled_cell = reader.get(&0).unwrap();

        static CHILD_STATIC: u64 = 1;
        loop {
            ref_to_smuggled_cell.set(RefOrInt::Ref(&CHILD_STATIC));
            ref_to_smuggled_cell.set(RefOrInt::Int(0xDEADBEEF));
        }
    });

    let reader = arc_parent.read();
    let ref_to_inner_cell = reader.get(&0).unwrap();

    loop {
        if let RefOrInt::Ref(addr) = ref_to_inner_cell.get() {
            if addr as *const _ as usize == 0xDEADBEEF {
                println!("We have bypassed enum checking");
                println!("Dereferencing `addr` will now segfault : {}", *addr);
            }
        }
    }
}
concread-0.4.6/src/utils.rs000064400000000000000000000062131046102023000137100ustar 00000000000000use std::borrow::Borrow;
use std::cmp::Ordering;
// use std::mem::MaybeUninit;
#[cfg(feature = "serde")]
use std::fmt;
#[cfg(feature = "serde")]
use std::iter;
#[cfg(feature = "serde")]
use std::marker::PhantomData;
use std::ptr;

#[cfg(feature = "serde")]
use serde::de::{Deserialize, MapAccess, Visitor};

pub(crate) unsafe fn slice_insert<T>(slice: &mut [T], new: T, idx: usize) {
    ptr::copy(
        slice.as_ptr().add(idx),
        slice.as_mut_ptr().add(idx + 1),
        slice.len() - idx - 1,
    );
    ptr::write(slice.get_unchecked_mut(idx), new);
}

// From std::collections::btree::node.rs
pub(crate) unsafe fn slice_remove<T>(slice: &mut [T], idx: usize) -> T {
    // setup the value to be returned, IE give ownership to ret.
    let ret = ptr::read(slice.get_unchecked(idx));
    ptr::copy(
        slice.as_ptr().add(idx + 1),
        slice.as_mut_ptr().add(idx),
        slice.len() - idx - 1,
    );
    ret
}

pub(crate) unsafe fn slice_merge<T>(dst: &mut [T], start_idx: usize, src: &mut [T], count: usize) {
    let dst_ptr = dst.as_mut_ptr().add(start_idx);
    let src_ptr = src.as_ptr();

    ptr::copy_nonoverlapping(src_ptr, dst_ptr, count);
}

pub(crate) unsafe fn slice_move<T>(
    dst: &mut [T],
    dst_start_idx: usize,
    src: &mut [T],
    src_start_idx: usize,
    count: usize,
) {
    let dst_ptr = dst.as_mut_ptr().add(dst_start_idx);
    let src_ptr = src.as_ptr().add(src_start_idx);

    ptr::copy_nonoverlapping(src_ptr, dst_ptr, count);
}

/*
pub(crate) unsafe fn slice_slide_and_drop<T>(
    slice: &mut [MaybeUninit<T>],
    idx: usize,
    count: usize,
) {
    // drop everything up to and including idx
    for item in slice.iter_mut().take(idx + 1) {
        // These are dropped here ...?
        ptr::drop_in_place(item.as_mut_ptr());
    }
    // now move everything down.
    ptr::copy(slice.as_ptr().add(idx + 1), slice.as_mut_ptr(), count);
}

pub(crate) unsafe fn slice_slide<T>(slice: &mut [T], idx: usize, count: usize) {
    // now move everything down.
    ptr::copy(slice.as_ptr().add(idx + 1), slice.as_mut_ptr(), count);
}
*/

pub(crate) fn slice_search_linear<K, Q>(slice: &[K], k: &Q) -> Result<usize, usize>
where
    K: Borrow<Q>,
    Q: Ord + ?Sized,
{
    for (idx, nk) in slice.iter().enumerate() {
        let r = k.cmp(nk.borrow());
        match r {
            Ordering::Greater => {}
            Ordering::Equal => return Ok(idx),
            Ordering::Less => return Err(idx),
        }
    }
    Err(slice.len())
}

#[cfg(feature = "serde")]
pub struct MapCollector<T, K, V>(PhantomData<(T, K, V)>);

#[cfg(feature = "serde")]
impl<T, K, V> MapCollector<T, K, V> {
    pub fn new() -> Self {
        Self(PhantomData)
    }
}

#[cfg(feature = "serde")]
impl<'de, T, K, V> Visitor<'de> for MapCollector<T, K, V>
where
    T: FromIterator<(K, V)>,
    K: Deserialize<'de>,
    V: Deserialize<'de>,
{
    type Value = T;

    fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        formatter.write_str("a map")
    }

    fn visit_map<M>(self, mut access: M) -> Result<Self::Value, M::Error>
    where
        M: MapAccess<'de>,
    {
        iter::from_fn(|| access.next_entry().transpose()).collect()
    }
}
concread-0.4.6/static/arc_1.png  [binary PNG image data omitted]
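The `Ok`/`Err` split returned by `slice_search_linear` mirrors the contract of `std`'s `binary_search`: `Ok(idx)` is the position of a match, while `Err(idx)` is the index at which the key would need to be inserted to keep the slice sorted. A minimal standalone sketch (reimplementing the helper, since the crate's version is `pub(crate)`) demonstrates both cases:

```rust
use std::borrow::Borrow;
use std::cmp::Ordering;

// Standalone copy of the linear-search helper for illustration.
fn slice_search_linear<K, Q>(slice: &[K], k: &Q) -> Result<usize, usize>
where
    K: Borrow<Q>,
    Q: Ord + ?Sized,
{
    for (idx, nk) in slice.iter().enumerate() {
        match k.cmp(nk.borrow()) {
            Ordering::Greater => {} // keep scanning
            Ordering::Equal => return Ok(idx),
            Ordering::Less => return Err(idx),
        }
    }
    Err(slice.len())
}

fn main() {
    let keys = [10, 20, 30]; // must be sorted ascending
    assert_eq!(slice_search_linear(&keys, &20), Ok(1)); // found at index 1
    assert_eq!(slice_search_linear(&keys, &25), Err(2)); // would insert at 2
    assert_eq!(slice_search_linear(&keys, &99), Err(3)); // past the end
}
```

Note this early-exits on `Less`, so it only gives a correct insertion point when the slice is sorted, which holds for the B+tree node keys it is used on.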
concread-0.4.6/static/arc_2.png  [binary PNG image data omitted]
concread-0.4.6/static/cow_2.png  [binary PNG image data omitted]
2S7_K elMs$BTFb($#kHn$ @-SQUR)c "@Cֿ W3kD M5'Uݶ7:&  h1mYRFCW t3c @~e~@ Z H-#4RU=ܲ]s[Prt6‡c( BZon_" j̛T/gqtJDGdm.}~Ԯߴ J6C5ޗOZKꦟmR  @k8Z?h][i(@K`s Yag}O<TR|\~S@RᣵTH _lMelq )/m.BίtjTG["y~^?n:J <'`U?BcןvSEG`;},dYN7vX_zֿZ?@ [6ӌm[d+C@AԶGl=H" #>2Mh /|ls{* 6@U)eIDATdo[i?}~DbJlznw""I/ m`ꩪHz\x&sn /|ksr ?CVPiXg4bx]í}B"}ܦq#ԃxGKU?4l{_\.FFl:ѵŵԗ@ WdQ>][\ %/m Yѽo讀-~Wc~@Cؚ_T>[U@ IOk<,2,I DMyX\ %mnB~~:7GCyg}4MzSmWugjV ͔!vW#@hmJ@l| 8E m^sguEq3o߭AK.#ݶUÜCk.m~m$$i$ 4mA .Qns+D#oM/9!]@Ӑz˘s:IԿښ7|[f@b-s[g;~?j@z{M mn[oD Y_$n:OH3c; @ Xu䶂V##pfNtZ+T@_& _Ynw%>jo}39DjArѯQ0lmے_Gf@ܹ Iz>ܤRWZLYqG PgUJz>ǔPŻ'Qy^, @DH& *SHWg*_mG *U].H5CC lu}hݷ)(O !Kr C )Qis.zcT-ֿZ?kmS\[@*l?>n{JhSQrlZd?8y~L`&H U윺~=SQhs<Sq>sjՠ8V>^G P_6DѪ{mvE joV>Unif2M mΜ_Uks {jTP XlO6$:hJ&p۪T9 MRE W}HTwW ȎYM`;=ug[!40! ,z꺪awW~oI&v{8D`W-5՝TY.?j~x)^mU{Zaϫ mکFM~s>ގ"lՏTyqz@ HXU z6H\5E -alsuUzjS|߭XX/ (jsUyC`Z:Du8ȦE lmns o {@]zx2$j]w,1lذ7r]tOK7GKGRIm.yVL\z8J6WdaCC&aɚ6n9le4 i|9 0b2] @H/$5@  ] @H/$5@  ] @H/$5@  ] @H/$5@  ] @H/$5@  ] @H/$5@  ] @A#ty.Z.wҨ@ w͔]Ϛv—CV'NC/T#X(ۜ4Q]TlD*F&@ mUU"i~RrQeۍ +#URYYgEj2 @Hգg^7}bFee\z,)n\sS'qq3M!@@x3W7Cz -''Uҫk\pL9`V2x4yʊXܣCڶ\t\5O\z|G99_!@JටϕN䔃 YGc&So.SfC^,t. 
No(wƲNX!w,wv %R))  @@hLJ|9frߺ}+ϖ!:Kdε.:z2wr嗩+diՉ'1_84mJ.wu럕w઄A pe\wby5[X^T~UAmv#Ͽ~F|v< @ lw˓Z2waۥ<|yRSIvg~LB{DN:,i(ttĦ4)tlJU)B]٣y HN6- Ln=lҧI2dۈw6_js=%=esթ7F/ |Ȧ{lWSɏl94GkEN3[5E7HzmHFңs5ζlVXڍD,7E+]GdNS#\3@ jȩDRۤ":EĎ}Aן r0a把E\ K;[Y{\E@ڜt5G/- tR2/lr p~uu@_>П́ntu3D";"~TgG9q탎@ hs6qmsF`2: {\N}DƙSsӁ"W-ҼzdرRZdf_@܂@~=gS6id,毯pꤔ5æ lEl9Fbo|u"8ݜ}XVkkR#^Y^\myyG`c~h'W_wp k6sy,X^]tS-Q7Ǥ}O]túKCN;`Y'2'l9dGCᮦ\9h;5=~/9ZXmEk$rtwIt P^/w>[W{^ bsuIz6Jeq$H㦺`8x/=7S$?ټ'ɏhsVvM˝3I(js~tBX/p1/j,ewuĝz׼goLi5p\jpjs:8ѭ_K '6~RXPc~X**~[.›7i.(})ڢ y\lR)YT!n"Nm&o)/?t|2iټPߧwt{uf2,^R!7=>W>n,_Y);m\.<ߣI|X  @ z>~ :M&M_)G&~4l-\vTbټoS[\ιc)Y84ҥǿ]{1g?v{'1}v ?k @@ ۶eeTTTϿXNy[D{TJȏ:{/"݌ߺ5tI&bSH̹&=:EΔG_]k`'S@ -^Z%g2CL[o4=Sn[dy]ή$~]&>Ci m:In"MK"yRYäN9NΒL!@BC*3jAbrZ&ti_$n)?0[}xN,- 1 2 2 5 ώ@IDATx|e'!PBhRTxvzV,wyYγczT8`Ap Pz'Hf찻iN{}3y} !!#xLsϴit6z FN6&#tIW#* 7&U*;IU)*@ p ~\NcαNcMK_;5D@Ha;UGhX'郓:jͩ  @K  ps   G8R4  ܜ  Q;  7  q #.E#  @9  @K  ps   G8R4  ܜ  Q;  7  q #.E#  @9  @K  ps   G8R4  ܜ  Q;  7  q #.E#  @9  @K  ps /OJfʏs6D] @}: xrK:bq[ ^n  P +eQ M+OU$OCF?B&O['#-[dcWKpMlߧwlG9z!|zbZ`R٣\uF'i2SN|ʚ<0fi!&hp7;{Eh@6-rO=8IW3J}g@+λi@@hwS@@(QpX  @S*  D "@@*@TAG@@ w!  T =  Q@@ p7U@@"@E  4Ul  @(8,B@@Md{@@pGa  M n #  E; @@hwS@@(QpX  @S*  D "@@*@TAG@@ w!  T =  Q@@ p7U@Mk&EEEQcjD'f z|δi_*@!B)@:h n|wm 7M>x{ G+qT@BŚgk`[eR JKK*%RI+~:]`Csߟ~ݷn32~x_@ " ,%>WWW/Oɔ  wCXH>=J49\XӼk&SYfm+nʕb7&Ccd u@@RYV~fP&h&G`-k^|vE  ݥyfogfR>]=e  8\ aF[lx2# %%ᕈk@@ j܁[HRߕLfN7_3  @cՍlԑ%ÎݘB&&֕L;~:&%S  n~$@@@K-{{NLB@@؃lhFw=&R  @zIqwNf"@@ 9k^l=%n~$K6gfwNW3i   \Z4kvbdB*ͤE  @ hnмF;rOk*R8 <9uGE  @ h4p'~˶L@6? @Hu'F7v){k&tq윰I  ^{SX{_ uٕ5G`;}9-E@ tOwe'ޠ֜  +8bNp~Q*k   lIl   XAEux  9 }Ff >1bI GxXۙ{]&!        1N˰'@@_ 좭z&3f/qR7sI]S* 4PJN'!H"ݙsÒL  ZmK5;jO$!Htgg358$@@ l?i^ߏe$)̮l$ @4X2hC|Q   Cfw0~X l!!\t4i:ݲ*~@@z kT4%m5H9?׿&CŨ  Չ;5o-Wq4H6Z!jVI +p6}fw~.1}yhy t:zoL)UD@ ~Flw:t8U[᜻륾h@@t@[tfM-vm{{ F@8Fkd,|֚IZ՚A{a@@@ Wٮu[њ7j~Z3 @@( u}5Z'!  
@iv_tg7 [`K]wfE+p6pGz h R`-eTU;hh;?7 @(PKh^j&!'i×hv&"?]h7 DiGߜR,j _j(@hgj>T|7t6B uvת,lj ZD[fA:FZDNLB ] _9g$a F@@wmܧkH]i;] ͑wl$]w HpOw4ڏ hj?7Di\]rႄ2掚I P+V>9&ܼ鵛@R] Sf}o#SKalg= fD8[h~~@a  ]ZY= x.:6 eoL"@dmu4;HE.%  *hE.#QE 7S91UJ=H2WQ>μT@(`W"Ŷ?I7NR_ٺ@i'CkV4[# @s ;G J5g5xAb{- @RPFM{i 6$bt*6-# @ߨ=Ǻ%+4;!d $]vvjW[%]kMD}6m{ ZϺH˯N6RM"B2J= ) p1R0]E}~ *"7q(vTiOJ ALÖڕ1bQi-Wj`Zv_sk^cޯ]٘ ?𽗜G1^rVZ@VJמʧvr=USC͍ 7v5nfi@ 𽗜>^rVZwk@\U< j:65G Riڛ<*ld$; >O,B vEۮl;c.ހڦmI  E; #0r[#V`l $ٓ$m81smufP`q1,*0vn4II|l/i,@@ 1("@@@ wf#   X(R  #0@@XpB2@@ @  B;  @0F@@ ܱP @@"pGa6   "e   A; @@w,)@ r%snMI;iz gfj9]uSPo+4,I6bFeTj" Z 0\Wv;iV6}?_߫dVMPE့ufMm+gъr1$+$}6ڈWOfm,쵼:'(- [zʾB|o3Hb$0UC@@ިMk<=oipi餩?,J=e>CәU6<dWꐅVnm8êj% &'?e>*Gz5LE߰m"H?noV6k7 D_ i۟Ul{S, dm :SH=^FMlL?[('.**zaW!n&WP TO SѰc/eh;٠P]#L@m|Q"XH9@+[:b3!$CUmw j{"9\Y\Ƽ*ʬepQ^HG Vzpdޗ%2iڵgי5Ȗ'Cnzl\@J;nİ.C2-tڶ1mđpZu8ByGmp 7caF pA xoG,U.>}NE{GE/rzYN'R\U D;Cdzw^f뷉Ȅ\MuE"' :sW"-'|1^dtY<[\?2I$[|eȷo)+om9h%"cN^{'{Ȝ~Z*g-g#-'!a,0* dn)U똰+SV wr7x]"G_lz5E^Wϭmzezh)!"\h cMW/ Wʷǟ},`Pry9[[d"$"[]V tNߠW$@@ܾ\_YelYL[WZO+>۲ 4+i:cb+`冤\в,ȶ4&ٓ*-@$2TP(3~ -2~HՇKkJ*厱+ROܾO+9؎r~_ޝR,Jvag\|zi&S<9e2t" E:2Or3A ZNq6Xݳ`R9󖅲xewL#Kp(@B2 rǖңK\(wI߭tw%r^?ϓ{ǯ4pv. >gCsesŮܫ{^QC]&A]7_{w\wVg^W}{hi A eU2m9rm[2{{kmr2\Bm[P]T\%39h'm3[:ϔ%CL)) %ҭS8WmyE\둥] ,^Yx׻kʶӪel;|?jۦ{^Lq^g١>@pA dȐ+O/+N$vãϝ?4@87$oB,75ٰCq7(G ÝR" w1  R)u,  @ p  )%@R" w1  R)u,  @ p  )%@R" w1  R)u,  @ p  )%R9{/E@Hc Cp$@@ wIuuŒwG}$2h ׯ_UuY7 ㎓m٦Y۱E9ѾL"'OVZɉ'(={Fz -?L8Qv!zL ^)xRvXr \ee[KqqY&$iڵk Z;6. 
@ @イ20an:YjU}BXP/>.))իW|ݖー,D t׶}پyV]I\y*`]ҬJghVq}UK@馋5{輁/6[ziڊbU:o_yqLwۼUc @Xp_uR2i)R[}fwM`Pb~,쵶h$@q/C~GsGɞuOR?h>6fM?9G3)o4{uű!'Cd_6}T[: L@jfk3#6fՁN`mQ?<֟ݥR~Rݩ+$Fݍ]Q~_yk&%FFMfq$ Hu75{f&~ ݋ ݞ@i/i~OشuLJqd%Uao @ZiOi~[s5b'@@ ;j?k~GT鼑SۜVw.McN Hief ͩmJvPoZ i~(yGn7)cdmBl4e_jktޠfQlwmWlK4HE)O*6unm_X4}}|i~LF!u;4Ef h7{BiByl_x .)h 4V74wm_E;@vO6Ö O5G}>@BUN׾Es*olf u15 hWhuޱ>էͤY  -pVB7Amܕ(@@ hi6vs[i  @i\oI%o7[  4F`ndqtޑ)0E?EA@ t~Qa 1NK _{*{JQC"pn]?x@B!9\t~/?7>m~ƸxC <9[2S^:Æt;5߻" \mSt>'u>C`5X= [>]= DNٛht,ڨ/n*^ S@ a=P&!XtwŚ-?U޼jA@[ SW]70)Lj׵%%  @ h6f~R&&mcs@@T S)POo T*" @CwFiٓ׶_x3YTzfݫay^=f>@i+ݵwlOŵڐ@=NW]m35'nε>l[Vf3h>4]`'=yf @ MnvڑR']f珏Qx?nk{ۆ8DWݶc؎Σ+ޑ]p k@=aք{.:N:?@Ako}6$=d]JtnMB %Z{s.nk}Z_I 2:S`.P.y{yeiv2x#MۺݚKF FikxOth;ݯhzb{ts۽B]5;iqֱU{; xE \:${ ѽ˜tè Lr^m]O||.wn%O_4fS^Hi5; .قkm3$+`Y9:ZL_<qJhnhm]O?dW9ܯxuI]iEaUazgk&!@=zX.z6aVm/8ul]-`]ܟuFTφ|7!Q -:O׵HxXwS1uH P࠮ϔ>[n{O@j XWRHn.G %ҏAru؃K`O}GzuH PCJϑ{u/9X ?ޗz[^"+uH |gMrl CW 7ğ{Ι>"Y_ kgF!!@z.܍pvOJp2D_k9 7w]tWW`[mZK}V/ -CJ}Uv-[6P׶!ɼ2DT{??,|F1z Z?JC_{ N sl  /yv!ou^u4lDg]Q( ˕ ڇŻy~Zq[i&%^ 5;s,cj :hLi>P] !ֽl5==Hk{ {}~^'p-5_='!@dL FhY3  a\9gF7U3 xF[CH̕EH9v6 " Ej‡/ռf4L@WPG# 8aE6_3Gݣy_I~H:R=[`i{I=ЦfF~9X$Uq:MB 4AEyD"XVk'o[qhLB m/4[@s$Fnyi?jn[&!@xu?4^=#`ݵ|b>1bĈ2223) 5_0zwfM \ Q\9ޅ9?5JF [û4\{nl7ш+aL~=6~mWJ\*˱l(g` \ݎ ή@@O`GSH=*jp%9^8"0?^x˒r$0tH+!L   [zR  !!L   [zR  !!L   [zR  !!L   [zR  !!L   [zR  !!L   [zR  !!L  $@*).Ljջ>bE@@H߷DH5}Y Pv={z1ղ9eъz  oj_~}r|}\zJ}:,(B6l:WO  fG   X\sMNUUrCK%eF n~.w)@HSg.F,)Α3IrWOsdcE-WIޯ P~WX.Ch+WQi['(yk,[]ܬЬUc  *p+eԿVHYrޱemI\Ry}@ǰɼrdhm\}WWJ-%}}- "_'L_("kI&5OZR@@X\xcvhNr_/ LZ~uWz.N-/d.D V=uVWTVcA \s"NCB@@z)=[>dSco-}R,35>\9vR"0uy WV?qcMI7vD   a[bulٵ<*ybY]IZ.!i2yB9瘎RP,/ԫ떬J 푲EAˆu֪T!΅nXMm8]s.πķs. 
t [b{W)K}y{\x|G9AppyW8=gqpaҲ;s"9zv5[g7,2sTE~DW䮟=CN;N=hsn"'vDv9FErRhsn+"?$tH."w׊d=mض0C`~95f/(v+䃩%]KYrΛJMfms䩛BLrn&зU}:pH Slj""^.R?{ĺhsD?QV"{*{o߽vHFFsx 49"s@cWnY1G§jG4U~6rWV^%oZ }щZo}~_ >RyU?^;}Q 7K54]>ěwl#J2&CbUN _괷5оBd٬˙@Csν^ϞYv4ۮ!{dt9"}DF9gW-𿦫J VIDATȔHfgt tL "^]n2J9ă:EKWڿ.\L6kLNT\LkXpϟ}\`ߞ"\ 37WiO\Eﶵȉ# )"wʡzu{DNvUdioE~n%9W1؍`=`۴juzя">b=@dXz׺}zZWt);Oö@+7If"g{'%Jiu[:`~CU`ta7!;9ReT# 7yG0߽%bW -0Xy%F_?>CdC5֫7EE, zoaE,[?|At7Qp{Pyӂӝg8;_%]~,2e<`gC iZfe B-ZdH.>ڵ~Jp'="O)r>+p[W"&쪁nۈ<~.r""Un oxDsY"Z^_ϝB1_Ds&k=}~H{t82@ /SJ&rY0ٳD0}uD~g[H4D9ι*w oǗ ِn 49׹Ƞݚ>{^Gn_ox_!#zH .n"]|AJ 7nS!V ܊"hWYh  H#sdmS V{cknn@Cy-#D6hڋ5&@9bs_E<ٺY\B];@ Nq*-#6ب }'Z':uGkh@"Ϲ&n6:t BݜKfBB@]QCvT T}Zyv{Zw/]EW(^ڵɔ3ϓN kUϬIߕʆUNm3:=[kge&~@@o}_*CF-m N^(%6kn'0_&|Vˑ!G %{Q:2Or3AjM˫k=v+3oYx|`f'p73G@=VWRz< n2=uZ;=\n+݉5|9yrrN:bvr%.ħ+/+v5}^de̈r692ܻ庳Mp;GW@@F l(i6ȑl[AQ޾6ks 9W8C2TbWelI[̖3euIuG6S+oJyIat%Upgz wsɳ_@@'WVZһJZfvmwjdS˃eڲy!kk0e*~I'C }2Ԉ:  @vmih_m^=[qҮRϜqv -T䊯1,>Oߤ$ L?aeG  7]|UTVֆ/wbYYD(Y7ENY24@w{_wJ r.Yr՘%E՛eY)  /˼rᝋ/Kaar۶9v\MïK䒻` ~1=c[P 問;_/i96tZ,Uc  "p1W?&xz^2rxM蚟%n%?Ln~ly`rhytFf'g\6rƏ)?LypY`;7>βC_}A$$8T@Hurrinx๓*/iNڭd=}m^zӥ09d##ҿwyR֗Uɔ6$C]@@,` I۴MM6`PM-*Ӈ;  @wS  T;  @wS  T;  @wS  T;  @wS  T;  @wS  T;  @wS  T [٣F sq0saP;}ړ wA   i\9__9ڀ@lc 1 2 2 5 ώ@IDATx\e?wIQ&X ,X6P`AEx,!^A("^ VQ.-td-3ٝvN?=CHs @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ 0pnD;bhD5 @ @ @}"p~sQ#w?#_lZ p'Z{x]݇;hcLiǿ5F @ @B`8Ȼ+|Z1ȫ#yNnCqL T붊}#?4RhǨU5ڹ71,5-O",tM><:|}5 @ @ @0F8Qyƣ"3<m<ƻ˧zct.FX7E##WD~ @ @ @@]m1ȖGǮyHjyw;#y|\V޹ejb"+YmyN[-z$ϝ'R6s#ό㵲UM1}%rzz{j̜1r{ϑG-N93Wh-+_$uiEӏcEygDm,nk]?V}%1sq{wF @ @^`;] ߇Ł[#7D.?mHΎdH~H^m8i&e1fslێvK;w>GsW#ٲ9,)ӞlYBJ5Ǟ_+^$s#wG.R$3ytax{㵝N?3}=<" "yM׍d29ƻֱɈ-%,=1iw|F @ @W .Y?p;+VϗGPfa"4,??6F% YvH'H#Y| ,:6Elo>;r^Sf]$U=&rъ./^jq^Yq%Ub"c.c9sc_(ɂWmx:^4 @ @ 0Ytɻ:*y'D~ۑ"5rq]UY.#_WECgL嶹E޲t߳RDتBж|H>,Erz$;Kr}Ni,Vt;ir,jq~kS]rjZǡk%9GNrN @ @6zk͊="_|,V^%ïkH~ПE,6mïKwz7۹9li,3ǔ>Ɇ"TՂkn[[9e}aczNoyUX,EΒz"Z-aڹ֭,RL5<;Ng @ @Cv.aEG;AwE{E^Yё,d3V}Zں+,D۵}4&>Gnea& Qر%=2<%%VnjS1?R)3^?oV~^Voպk4޵SM?=&ïk>"7/B @ @N 
!mذhYwtd%IY"DDd%H^zDS>$^qޑoErO,I;";DOGEUckH>- m`)+GYޑF6ƻ#cXDz#Yx @ @ @ d!5Rl,Fn,!yddȢHޕH~w")H.DwEn\6/2^96mz[7fry>3/ȢQnw\ۏb&87}zo$H,z";GF>ʑ##dE/05Hީ~屲Ȓŗ;W~֎}c~iڲ1}m}}[ @ @ @oΏVt>bCF3)RD~[C$]n:^96olw6<~4.Ų#'>_g1rI$eɢS~gMoȿ#, &?䣶Ʊ2$] :vN?fvDnq2l֎SnHWys:&rV FxMfYD @ @f:1bz`$_Ŏ{W3ï׼&=;V-"֭٘U`mY0yx䑑WuNa_Ǻ< @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @}+0Է=I3gγl:'F_x}͛i-]!@ @ @>XO=iSNG @ @"2 ØIwf  @ @ @`b.s @ @ @`̙!0ܭeM/; @ @ @ e/ @ @ @ 2W @ @ @)PtK @ @ @ ( U4 @ @ @`] @ @ 0.p @ @rS @ @z)fl*#6/\Y @.NI'v @ 0-V^&ȯ#WE~B]E @ @`v(1<1o#7wrp>kFɢKeW$[]mL90lDkIgj[D>.ZY_cegƂ,t|"wEl+UBLoՂ*2'-7v%ST,}~F}*}$6!~i4?){1|$Y\Y9H.ErΑwGeA,|(#1׍΍ ~W$Gnh @ ( 5 @ 0@U#_c"yx-ȋ#έHK"]5/<%E~K31w9!w,rm$^+yS$ 6U˻Kb^x;9",tIK .yo\tE#k 9%?v9'z%xs+t; @ @ @,8Fs81. @ @ @S&^Hr$f8u|} @ @ @)\^pYMRO8Wit.%@ @ @L1qEzI<{~/9]L9 @ @ (ۏp,\.Gض;R?A#`(wͬ1: @ @ @ N>;){"7D1%SK䣾~tvҦ#v[bK=2^<1F @LP`Μ9:*a>Xx}͛>. @`@YN^.2QzZ|W䚜n"kGHtN$ .ٲ @  .+GfQ-uuYW  0Nޏyw#j8\Y>6?ٓ bDϏG֭-IEY @ Щ;\:]vzN@{~?1{ٲRosb_=>&y\@2K @ @+.N7iGh`qg\10HD1bc}_v>wE4 @ @ s> Gq"0w!ӥ+ApK_]iv.+GO\~y`z8υ#c\buW9 @ @ @n (tKrKKLydja_9!id֦ @ @ 0S]fꕟ;-VzWdr=gŚ#"uWt:яd  @ @EoN֋ G>~l~wlkSlFѥb @ @VL@efޝ*@4PwűyyX:8M  @ @ @`& (ī>1?]mkȦ;?>6~lwyX @ @ @LPtW~bn|#/,i&c#o팧Cٌ @ @fLt6WDgўY<Α;sgCYM @ @3]@eױniclӫUYl98{$0Hg9 @ @XN@e9 F.ec#~'~tOt@e  @ @ @sEf#(nDtlS|}maE @ @ @cEf?gH<)>Y}#]#j&  @ @ @]&L7v¡ߧG>b @ @ @3EμfyGÇx}F~{9=:}Woo_ @ @&}14nLD?7IV;Ex{Nns2u'33R~''@ @ @pˌN @ @tS@ѥE @ @XE{  @ @覀K75 @ @.38 @ @ MEnj: @ @R~\{ @.cXI @ @`/,_ 0]f7^ @ @ @`R]&A  @ @\+JyRNP)xh)?:խSu;j_{G)w޾l]ʇQʾr JomNWʑlMR҃\RR.M)Oɣ[*\uq)Dd-Zۯa)vͩâXsJyN`N)EN9ӏini\ J @ @g<ť\R>RRRf^[i_(w'G9| 2WAmgr!˥lR΋hv=ǽdd_^8]1o8ͮQy͚\V@ѥo/ @ @%ȸ% ..:usqbU%?=QySP{žK .Qc[=ܦݶF[mR^_e.o;m4(EC @ .U^7ؼYq5rݥlq)?`mrnmc},~8pX?Υ @@ 42( @ @WS9<}KTޅr~u75ɥ7]]땲Ҭ6RcR|jA)1ޞ @@ (Ee @ @NYÒǵ^>*?vb)RmѺ ŸwkwS%jYxztL/"ܸ>?ꊬ 0t^WxιfXUG6M΋|+fcY @ @.]EdFOl !FH<, @ @LSE0 |9rFwN"y6v\c> /7%@ @ @i(2e8O"o}?¦ i1,Z1܈gF @ 
@*wYhS#8퉑M5_jcDf;mƎa4G=)5,fQ" @ @L3E i?Yq1':6vi,ĢT{GOFVT91{AjW @ @/8Џi\1c;FUpC-榜>yLl9`"W6.cW,΍{? @ @ ЁKXlXVlI,Yd<=3rVd-8@6Έ ˻[m͘f~]z @ @z 0=8@1|Ժ#971ݑh_A'N3mw'O;FO|q~˻#'Gi9ƩYTɣ~0oN4ڇc~ȿ's86NZa:2.N~Ys6o-"@ @ @. ( bgr7v/Y4>qq&Z,t9.T6%@ @ @]}ca^fb.6G?MUR'ng6306H~gP{^F:߆# @ @#2qwŮoc?dmvb1rm@?}7{.9EG @ @ 0asfu c>8Wr\hK"-Sb'F1f% @ ̝H}4te?[}+rC[[ۈ @ EK6aՔ-˱[ꪲ[=ܳ*S֟vO|7.5\Sj{W\LzzD/ 05Z-6?]zvƦ}ylf좑|FZ+ꥑ'x|& @ @ ~a'\`A j{% ..rO~5k=m:# 0Ph0:ձ"ywt͢"r:i LW ˯#R]v^ @ @<+7}2.zoIuLHmG«Mɱ[|Kb'Fc^QKNk ٚ1ENiW @wc#A1O^#@'QtŇ#;{prZ__{^9.rCzVWIJ" @Wͧ>ާa7{^_ @+Frqޟcz^߿ 1־@C-rtH>,5 @ Яͧ/c=0:R=_?ُg 0ⷑwG3a @ENc3dW>u='Ж@~mmi# @ 0}Q "ӧ{I>*q|G#T`h26gΜg t2Ӌc/^ȾdY̘ΕA @,8Asӏ_'wcˍef  @` oOZKe(3iXfֹ03fose{ @ @$ <(}@PpanNGc_6 l..Nxzq&{9&۩~^稏im< @OV㲘>6?oAZF1ڼIZt% @ @xit!77l~K>B^3&@%0ݙ<]W5w/#3f ؃ @D`'}q̟X6(fx@ ԟ<.xxFtFCW @ @ߎ_m;bA=_A=&m,3KD@e\ @ @S`Xy'_m7ǎmXf4@ @ 0gb/5 kBGj&  @`(L  @ @ <6zlPg/m n|cYbE)NO @ 0:6#Vo'I3`S1kgVmIbE)NO @ 0@>Blt]jv{%@^ (R۹ @ @:>6~}c7͔b5;/7i,3KS$2ENK @ 0@Wc a͛$@)Ok  @ @xm=v%)oo,iWĀc~2 0.S @ @c k?1jcLL 粕 @@]z t @ @ |$X˜~[m~OE o͛$@)Ptt$@ @UqU?fYptaNoXfz(Cl"@ @S`Vd ն:?͛\*p`L^tӇM @@]z t @ @ #XƘlKx9FcY葀K @ @`Lc[|5Ok,3b.*U @@]z @ @ |zL#@() @ @xF,yQcb2 %;2 0 . @ @c k?712]^u$j -EnI: @ @@ m(WmnCq]bōef  @.]u8 @ @%oObX#c+,8OGi~N:j+L\`w|Ϲ|̘u.` @ȱN"FV[2quH'c$/~#y=FgDzD#@p ٕ @ @`Dak׋"Y=He]HO4֘>2/r^$ .zVD`R./tE:7xzq^b<8^ t~A{t.` @L;bZ։鼣ڲ<3rlc ?L8*@IDAT!7Ȫꎘb Љd]Cs};ȶyf1+fֹ= @ @#Wq⎎lNnuxj1m5 @ @nd1ed156yyJ~gNƺnF @ @L#S/}_?+}4 ueL~Ke=iB @ @Ax]GO1/ߟS7ngiw] @ @ @6v>onsy]x^Hxrv!@ @ @"8x7ןdFCPFoٽ.8 @ @ "}__[yENhQ֊5ߊԽǛ~G @ @l}F^Ӟ̓ Ű;u73 @ @L?vi#_Gψ]r@+ @ @գQf>Ϡuso#c^1h6 @ @ ЏNXZ6@}WKѮS._/ @ @ @S(8H_˟;rEwDF^O]~sK @ @ @^ |.NXM/;\m iT;I#@ @ @Hq^]^ɢ٪}=&@ @ @!g #/` gƎ1EvN @ @X`8*SqsI#@: @ @)0gΜg q'4z,^Ⱦ4VW=Ʈds]r@3P`8fC&@ @t,1YOv"X^'1x\6#@` E1p"@ @TAk5uz qMwzL^fLH @ @LI? 
@ @P`gHs?RO9F`F eF_~'@ @ @%-I!@ @ @fˌO @ @tK@ѥ[C @ @hE}  @ @薀K$ @ @.3< @ @ -EnI: @ @ 0]f7x @ @ @[.ݒt @ @ @`F (o @ @ @@]%8 @ @ @Ptї  @ @ @n (tKq @ @ @-2/ @ @ @Pt閤 @ @ @3Z@eF_~'@ @ @%-I!@ @ @fˌO @ @tK@ѥ[C @ @hE}  @ @薀K$ @ @.3< @ 0ezQ)ۦOQש   @ @` +Nx N)'o/ʿrǭԶ%0,QzS)O|eKcK㩥|ld:.PtY @ @ uG)}t)WDNŖ ;ڱ}ƫJyˉ<[)>3\ lIx; @ @`N=wnW>21<>$ڙǕK9bR^A)7ć_Qg*eKyeN~r,oP~]ʇv)Ur!޹+Jys Ͻ[}kJ5[d|}Õ-RR߲MA{]v~)k;>oeO#@@]zT @ @e_WZʦF⍥|])G]Olmvc7~,lE)+Zag}?,J9~Y(YR>㟧UsR~{Rkoiyz)zb);:9*c~&rh)?'q{ͩYQ㣭dߴ%0﹝ϕ24hMoe S @ @Fcz@kݺC}rR>R6~@)UwW6F5q'% Kyp!)/H)VqJvYwćOݷ.QɻnN|)xtk:)QZVʅOA{meCxO}˗]g,2O @U`xWR-)aL;)mΗxjk.1|n]NJ[`ʻi1ŝc2k61/ZZp=)Q`sDAs{r?({Z#@@]zT @ @eֽ_<^,*m՚撥˷uZ> .|Q&^|XٵKd z]q=wם|#?dpRq)}J^'='@o_o/ @ @}/pa(kt-Rܫ5}Z|LmgjKW_[bfZsicv\`sFţ.1iF=+2 @ @.?`^}B^_Z /:#Fc_/*cqŐmzTgi}_+;q2S%03o;>[ʽVkf4NjY @ @`س,yدw_Ӻeqk'嵥\uq)ߚZvi|u˞RNB)ƣ]Olnv=`-A{V2责N߮{c WvpU)Tʆ[ \*dav5`)̝2#3psy-s|f&Y @ @ @` d:ikoTJjYhhj57qf"=/WJ? 3F @ @ @ (i @ @ @Ptkt @ @ @@]z4 @ @ @` ( 5: @ @ @G.=v @ @ @`] @ @ #EA;  @ @ 0.}} @ @葀K @ @lEFG @ @H@ѥGNC @ @ `__#@ @ @z$#h!@ @ @[@e @ @ @=Pt @ @ @-2 @ @ @ (i @ @ @Ptkt @ @ @@]z4 @ @ @` ( 5: @ @ @G.=v @ @ @`f @ 09s29vTxύc1;] @ @ @ҿN  @ @ ,^ҡMG;~rgUW]/,n:ꦣ'׿~K>l6=9d$d={ny=w}{ee]v)nCܨ6  @ @,;wqG7YN9唲pru-VYp*L׾ӯ&ymٶ߶=oI'TnrחN8?.-Z{s  @ @P`8#9}T;v t*﹡Եf_Oeu:` @ @ @@gψGXӭ*N6#=:G[,{ČrM @ @wHbzp9E+r,hGs]iya,{Y;  @ @'zlHٜ?>{k}W_sfsY @ @V@ `;Y/\zf{}~y[9.g";#G4S]}zu @ZY.Țcn {n-ÂXFΏ-5rw3Fn p5؜}xF8T.:1Nc5cW"7EE֍]>n"V @LWX[9-[yN>eZs?~=:38ne/ˡL1wWop|Z1ȫ#DvLv˿d-P;0R*0Hf*^Wh V2DВdK"?<6 ҭ6Ne^8&w_ɖ,XsxHF\94Hg|DFj;\ޑV~<-3}=&lYpw~Yi׏a^f?VeYkS]rjZǡiU%#ӵޮ;ҼH#@ @=xp)?דt8h>ƜL f'#1#١F =XpumCdW"ze7埊(u U\j//ܢU#=j8H~oӥoo);(`_uOxˑ87tՄ 0=woSK*qJo|qs=K*N+^cYP6C޸NQ(   B`]Gc@ç+Pb)d[ιokq W] ॷ2Z ~RJPW7pto]5^ӊuUMVvPPE􊏿(4]5^'{B_5)QZ*^6U&(>GfV הtDo$ Ը`>*$RM/8yފ,#|Nl ]A׏gɱG@@@ 7{ߨ6Y\o-:[h@p{Wt[Ɵ @M\樥C;ZUfvWEP߱طϷQ J+x4T7lleMtHfC@@@ + +glCTb)d[9@=lmx    dR1D _Ổ&JE?{r?9s5"    @}|X, Of{}Ű6ߧ4S(     @^6Z) 4ļlQ4lT[ʖ\#Y%JcP@@@@@ Q-'j9ݒXYة8,fιGo?MC ;MѸT@@@@@ 3J~[P{-[1dVs.gUjYg     Pw5J߬!RfTc4K=|BA@@@@2*6A _qftM[i%{UHs K_%{^Fjj/ & 
@߾}SYe%:rgCJt-@@@Nܡ4 O>@$4exlPK_uo&44 ~(@MlR[{w     4Ҩp˛KE( )_8T974A(] @f.߱#  (5%ˤo] (%o3B;P+(-BD< @@@@ `.Trr(4A@TfkGR.Q=/p tI?Woͥ T_˒) _vN6OW6A* ~6jI}vWVS@ [xmJfʧ "*/HnY>f'{ 01 Y;]|X= .%s(JJ@bh UW 7c6UhpJeEhi?huah< ]*@$å$+;@I$#;@Giq:]g@r^;Dhg:5U Mcr(@KY   $BS-[jat4@r>TH LTP( tV/'ȏ3@Ndm-JЖCL.pZEjkĭ #Ds. P'[G->#Ƃz=/G(1 PFb*@@@XFYnV,3;\fęƨ7d%+p9;% bPf    v}eM{Kr":5Wuy)+9XfbR -ί^ RZ/v Pw]nǜ    PW|Z{[-2[>?%诓:u_aAO]ι4jljr{^R¥FxLeD@ }]7c@@@H,&__U(uÏ:UPձ; x#_dF!-&|6ђCAsu|7h1)B[WCoҰ7ubR=E@@@x.1SJS6?][^fʗ#鯷/Z'*NN2#9%ǣ^=/#Cm甾7x#xPNt@n4͍@@@@ ?ФFmNM>i fF``Inj sS뵻4`h)_yS] k4mx$ .[Q@@@ ܬ*MT[A6 a o dp?>#:ϧP*$K&c*Ӱ}.+i #@4nEM@@@H, (.Tת[y&ԩ|h K:^@}8+LIqȱ)֥ht 0   uh_ʘSSXD*uRX U L!:),*):_SPunUQdHSF4   q.UNYvdpьɢZW H0I+9XUtHq5>SK5%^@@@4liC LD$ L05Ѵ1h;Ҝߪ&y@ R   [$ͥ+^梨8N]PBs+j1O+\fAlht)Cώ#   Poh {wckSW*Jn뷘Uy㗏 -ιlɦyr^9XyLZUţRL=]Fr?@@@&f0YVJ'⏳ZPr/0_|4f(dKs.[/twix̾q药ʵ:Y%#.    Pj8^oPVVQo*^є E֠ z"Eι,qw+w*SxF7q PSQA@@@@=CѰIh%6竪M~ ˶pwiqd({ .˚0@@@jhI7VM oAC”X&\q_#ͪ7x*(;(EFZ`   qػol[()`qn6[]sw*T|u{) 1*#@KF!  @߾}SwPr/x0`xӋy\a=ι<.UsR>o|w}A}   C|'_7I^jp1+R[9WG0ϹVR??F@@ȓ_ӪYmR=6_IgQL.cSU'U,cSpdMQFR<NJ-E@@@@@.|p4@@@u,̸@g|@ι!ι9w˯xfΝ.qd)      PR'@OjRC@@@@(Wt)#~#     @Fht(g,rb&foɓ3e1zh{l̙LˈLw]<     -@KqmԷo^umԩw} dÆ {Ν[~k!̌     Pӥl}0`蒸V©-4+e5h"[A.P?kUg0@@@@@\ pKg}hW# .UH72CC;UÛ1     @Nht s٭dmI߅we|RQ3?      
蒮SET~1Ùꝯ'[N X      蒪R*F޳3ù3W^BD@@@@@ 4dd~>Zj ~UOez-pNBߥFE@@@@@ SZ.)m{w((WO`@1I Zޢj9uUwt@@@@@Ș.,k?RnUTRbu&vDeY7ܦx7.~`C|E@@@@@ 4dHA>9tӂ2A=W9.:|;bFޱ#)     dTFrzh/&q~иl<ZIs;]BH "    _ /%@* K*i}RXZs.nιϹ@+w2;s9so/S_ \ް[N1v,$@KJLTHH1\G1olx7$*'ѼLC@@rԟ 1^lv͉<;6=-ۑxL͗@ιϘ]ٯߚYk5={D)]|sß4 Y眗.3h?_KW _\d{K2|z/YF%gXZk,M;KYN.'mca-+ѥF#  @W}v/]iOESFƖG/Ǚ(H7_l~{=@*s_cbW:46هzw- SYu rVK;3kTW9 E -tI+{L1'*(lRG]0@@,04;5뺈I\۱szfn/~'wElc6l`\9 d[?'}s@D8}ҳTopRKRkg碅fFJy d|z}Awر#.iqux ,%`O|weQh;;jCA@@/?8Ӭm'O93w#ü6χm}ٞEszoDnbnf+tvevJ ;nt0*:ܾk @.9o`g7U[}jhɬy@.ι@q]E-WE m/6YΰQh_հ20=\(O򄢏--_//@@ fN],wEpP?˶0lu?l^},X/^:iv f?}<7Gn8Zxњ|E@ιFM̶?a(U _sC+s4:zcy9wۛ^Ո]Y@.Ϲ1hOGPc~R%E(Yli(mm& ;F_@@@>_&oD%43p9LXI_Q -=0Z/PHG 眿kG}Jg[8w35^CZ)^Gj"P.0@c A6K~΢-?kwPnTNUb4HA@@ 0L]'ٳ2@gM''?@]r}MovQf߾gSs_R>>\vU jFo}1Lw"\>弧:&1{r>G?LƬ푟Z|E.58E`P}}@@@ z]9ϘuZ{i lu"]1_On6tryM%˷57[ӝ[ۨR~:qw;\v2zitV)\sްx=&[K? htI*9n.TjgF)JBA@@&՟d⟖_߿>v'Ϛ]X=xŮf]]=}S@.ϹΌ6\sx 벵S :Z,o6~ٛlt ~^#(!sBp_cNsj<(&ChtI&}ݣNE*RYY9Q3`Ofdi   PnP]0f 癭U/i\ܯf̛zؕѕŝaR:f)ݣ^4뾅#Edc_:bnc&ezd6ٷTٯ@ι6zGUO[U}z<( }4'%QNkJU5ݩ.D,,hOQ)AA@(PѠE]@IDAT|SXw]`1jt(Z+~cvxi?xN:zI1 zې*J@ |sq^0PRQz}8\?lfv옛y)=i(9?m>o PWι1_]ru5Qu꺕WJ:JɌ}ɛ.yOm_Z;YYsB5P .wc;o;M_W3&qC2DELG;tS(    dU?LAȟ@Фj~黕iűS"[E 7?6;b^=>Q|ٱŷV52>xm (E߰//}#}fWQU{}muE]e<b "%v^|d?z|_7W)-~lRmT*+y^     @ 4( f{@h}NJW/<QY>I輪t/Kޏ,oCi:|Ӕv1㼷:43 YA XE@@@H |6MEJmm-F7p\otxA op$޸ +^/U&웪g$񳦽]_wTA#. +'^*'T]UӼ%ĮQm+eBږ[:]UOUԩQS@@@@PF" )͕ʛ7DU"ߑ_I,JRqc0AIAd]w/)qo=ʍ[lwK%kU%?w)+~:G2j-AP Q(    E(@K46N`I-P5u"7"ܯxLQ>/W¼/uʍUJ{o<8WxcyJmi픫xeFz!oJP1c!l߷wU|wIUT(^dn}:*sQ,l{ЍW/ -)*^JeBA@@@(B] Pt&bSc}B ޕHN_oW  eF/Ff?JBJh;=AGxZ}%< -oՋ3kFZ+~Gx%xoyVRZ)_Fe5e&w+ŧ'S:~gUeKFJà.    @qRljDGb ĎP7]xdC66caR8MPPF8l   @Ї e{?&~l|,s.9܈ˬ'KK.P\/ӥ[    Q Js.WҬ' $Js.WҬht)˪?@o     b7Νk:͛gJ2={-Z( &     @A pK>_EE*/aÆíAv믟1bKe瞶馛fdo|&f\fxY24h=Zhara?@@@@@/.7tϬY~~? 
0zkْ%K"lo<:|]p60_" .=c6tЙ .̺Y6X&     &6'22egp%l&-Xxk\d@@@@@ȁ Zh%|~KvQ2;Qb ָuKVC@@@@@ K[w~UYAd '.eel[MV똩q*(     @:j* >|Pt +/e۬>TYqW(I@@@@@jB&) s4f2Ȓ_D.̚Jn%.~IiP@@@@@ TQnW9l[&A(eifG˛dxd l|ORF(7)UWwBA@@@@@BJ^3({-뉷8Km%qqa@@@@@GvHTVtJ{;|ykY_m<-(J!uJp\bWk|xBA@@@@@./Rve˷_ ?K9XVƜ: <\ j9P:\ٚLg4     )Y['~a|[N%|[[/b{Yox;>O9B4wS3t珬      P4iKc/fJSдދVE@+ ֛u)?._z{\?m\%v;F@@@@@lj^L[q+|glGץYwq[Lq-      +H[xq3Uw S @@@@@ 4|$]۴=lO,+Pf *k_v7     @*]N4m:\c~G]ͮ&:MyW}w     PᑞZGU,Kk/V;Jc30j㍋SlG7.W@M"o+ln̆     @FRAƇ(')B(m~C( ~J]yO@@@@@'4T$z#ƳJ6f</OI.P fki7|$:Gi߫BA@@@@@ gϔbu;[ӞPUZ*^vfUtD w{Q@@@@@șYZSb_iSwL IG e{.Y      Y5x_ܣ4Q\PR(}W琟8<      YZkMYin=*%uR3[^~7@@@@@7}ݣNe BeeiM݉J 榖YY z7_fZobO%80@@@@HE;]RQ      DF$@LF@@@@@R%%      IhtId@@@@@@ ]RQ      DF$@LF@@@@@R%%      IhtId@@@@@@ ]RQ      DF$@LF@@@@@R%%      IhtId@@@@@@ ]RQ      DF$@LF@@@@@R%%      IhtId@@@@@@ ]RQ      DF$@LF@@@@@R%%      IhtId@@@@@@ ]RQ      DF$@LF@@@@@R%%      IhtId@@@@@@ ]RQ      DF$@LF@@@@@RhJL]L-lY3`@@@@@RC     XV]*++'q.'1bR u9s     @!dѥO\ Pd:0KO6f1                                                                                                                                                                                                                                                                                                                                                                @*r&V  @߾}SYwޛ^YY9Q3`!^X=1,1)c[qB>DT[R_~vUrL|8.4>t!  
&@KTܢ6\Tkq1-DK|pLǣʁS9 PWU}J1-!U9Fs>@@@Vc։e6P, e; G!lCߎB((cS(jIG!lCMGlG~FtYRGm@@@@@@hT{   y8}eyXkVy\nY pLR W Ha_ JJ+ {.W\J*'RXnjA@@@@@"ѥH  #vWe}f/e;~51*pL $?cuO+f`YlSlXZ /b͟uuq1Xү4 @@@ @UVV&V6Nx`빤rcRG}D(_\عcf/t:{_jU>HgRKKVD@@@@@jѥV&   OQ^7rm[yΏS?[M_:̺*R)s~]:{_Mm[Vk޹/~\O3'ر/h:̶~p={e\;|ё߻v}lH9~ᘔQg@EyŏԦ#;yS^0o"{1v_"^9KK{'ӽ?{[kOn0†_-diYٰ+o,:~b_m㾎-?]:_,W(VN?k3ai>\ixݬqkrEJf4|Nilܯx\96gu`F b(@@@ of|Z7mc'<*'9>:}5e֫ʸw &ۥoku-Z߭c˕֑α/`bs"[x5`;֡EG[uH[:_pL툳 $''X6l3Vz7Lt}R7M̶uOdz_>_#Rgy-VmaX&7ڄ;XVn\#2_;33fY9؊F__ene?eR\l|oD6*Se+U%Y߫~7x9I;c|`2T6dpKAY   +ْź>Yfvzqe5^SXy[}oJ-\dgnzp7lں{oCxZhW'O4D*8& J=k;-ݫQ~5fYK\|Jm az?Wx^4{m55iHK"_^xf _.ŧlD6{W:&v4./*o*1{er~V%F @@(__L|V?N-;K+_AElfZcX֫YMY"nmdO}Ȏ~~?[vij{[0tcRF( ?'E_Ѳ_կ.-:^oޡques~.UiZ3kи].^vyr[yǶSl_ك?mT*/_j:}4XئfտE6jZw<tPU՛\F@@R`f+F{5:ivjUZedvVt7KKײ}qfOna_ץƌe<1)Ϯ#@ )syTzL-}߽] kTw,YXi/mja{ǙW;i {KuT D6,Y_~r0YK(o[yInjdV  +JGv~׏,EXd=1恥é;Xb˓c]-cvg}Җo\c{pglvژ_V%ǤN%z̔9+2Er:+!ޡǦ.ݡ%*ۇ,NgōTf>Rs>ŋs챵?Mf[VݞZڮfhJߡh'՟`p*r ݣotƖvS:J~nUQ@@@$Y-m/n z+;WgS>Ik:Wne}_?k7t١ynos%o/gv/|j?1.1_:FϬIN[bhnGͶYcm"kwxhT(/[jSPtL2XSz.ʽJsee2@;c+(e%=\ B@@)>=2+fN+1RiC=([||CZPUr}vf~˚6jjM6}Z6ne=|b+3l5ܰ󝑥vM5lCǽ`~JrL Os~쁽U[wY8YڼEs |Ͱ؞q۲󶑆㗾ϵ3,َJ=n??_D) a[x>b?u?b{V9[;mmyM-~e^Fqܢq ^tiZ7mcSV]=U;0C-˻%1[m)6plD6kn{g}it,G>?wJZdy,>XM4UUnR2}-ʢmYٕ]mkKQ6JoҵͬIFvB_lu m{ֲYrDz {9dt/B[FXӶ՗4n`{ vZ?Tܯp JKWMiOP2^u8zW͗Quج*Y*AQb@@@!E¿|2{5mޜ0nymҡuPHQFnp SQQY?'%p2:N(2^d1S ėYܿ?"g ;stwn~a ٺڶ}o-_]gi媞6\_nUHYeRRǤ.u升_]&FW8]־E{OU֑j];}6eycW=֞`|t}G/JPq%XZܫ8%RdUws_\5<29ȕ/"%3kޡ֤u'&@l̝*ݟ_]tO-oXO琸u-~12I8x.U))}򧒙sD|DlE_=@W|o}l%pW-kh2N2Y]}W)ʹG)-UoE߽vV*?KK4 0?qَQ@euSi$~~ĔO"  d^eeSJGzx#S~9(wĖrӖSKKZ*%|LRH37m̮H˼E죟7?t3/YU';i3# .>rf 7QxWiUQNBstOUh1yo[TҤDE{Fl؍{WM$޿I4Xrh0$ƘXQ\4@(łR{T1gQx;e\TX|Mq3VKם%9?=/&wį2QZ~lGФTec_sT(PY9Uqo;ԣcX_W* ݫ_-ɯ߭yѿ)h" o]*sz   @a.ZK5?J料D{ŊE^B>]] 3RGdh@x@ f3ڸkڟc?C9HIt['+c\ߵ7b hvoߺˍJ[)KO'$ ,znя7ا^7sPu/]'$_lq뵱k|ZS& ^CM_Oɵ}M >[/oP\(\M a邈l56=qLK8M5k8\qmcǠh,0Lߔx %ZpBŽhnQnRh9 D9.b   -}{iG:QͭSo)~ׅw4G^o$?FRVפ`Fm5v6S?whgޭ_i49钏ߴrqيfg[}RooۅͲR]qw_{J>('N7eU~0ɒlW ǵ>?^W-~F̺{:l ~.z=MOo*Nj͹T.>]qɣmՏUwa @IDAT 
r)h\z4>MK陘;*+>Z|}8'OPoʆ[SzMyH2Aq鐖@k.@aQ@@@z~Ȱuֿ}zASܾYբ|5I[K|ǰWP Zpik2{]wnc ;4 'o4iV;<)s^ ˷5q/}!,[:?96L#oM {=AZ&swqSÉN|JŜpä|CCwM[mφ)s^W>i͕ôut|ź&cj;t 􇩷58-۹4FT3 =-lqrsZ^5+F#vPۭCjx,:jr~J'KV()SOWV`K2TqkrʓuR.MXt^6 (uˇ6;~p^.H' ba:JWJE"Y@@@~Sxي[|; |];>UewnTa!!"Opכ wيzLbin'.x? L/t%].M\퇎 {W7rMOp+6yM7MISbGU?xƐ )4RQ' Z-Z&_~p1C皰޽Ð0n-\13nѱkͰɌj:m #Aēup˕>nኗI_pn69= 3?KOx? .-CqJ'v_)]@C@@ڙF&p>}O^ZOқǵj=G=Dg/ I?~F9Y<ᾷ ?{'lƖ/ ׽ԧm{Z#!qπSb٢|{mpMGFnTm5 mxGEMMM8}Z033_wjI]W%Ⴥ†V|s]JK myM|gwi|nFySY8sI3_:uկ_]eKV/ӣPZem86 #- s\ ʚJ[!i*K/‰{dkc4á5+ ]תM|С&mt+Ë?%ju^#tбZ*zլ("n k]aإ~ܶ֯/J{׼oIZ&ZK9HBU(򒲞r3]qrAe]lEQфh   EZпo{nbЛ_._q[&~!v忿ݙOLO}r߇wcROz8h {ua2 :PuGEsR[?7f̪wc~T8 z\ [ڮaޛo*x.f pMZLW{ 6ޛ5~3Yїmfav^EX"[Ibo s_Xc@8~ݿӗ*aX{a ,z7m6o/~ p1[ׯ|3()3,h&Eg+dV~}Q[Jݢ  X-+ ^{ Tq%څd?võv ޽GX{u'L3 zrYxuԧ`6l@ b-/l9_,y0}G_w׊Um{=y[}L VBs2lkuM>?1=OSRz&yJ#{4PVNqʶl4,V  @ ޥz S6ySGcr`xuΤMz\yF*@1ˈGQD`W ! д@޵.a w X-󖆇~#̛yjL__|sgjzdR@eeswZu)CIw/cr,ӻs5P|۲\s,Y$|pVP$jW9]v_ѱ4A5uỵ! @|7jo6Oe%gt_l˖^G.v4jñ&lpc&\9835l=,Q ' ZS=?/}5"%;|M.(>۔ף\yT񺓔m{ʋǕ[;uV F[dG%LǬ٩>MPc92e⏙ſ %+߮l4Վ̍땨-?MOY7=1X[ޡǑ-i+~ +>|⭘(@@J4*!v de<#5)+[E@-z -7[-K|}[ 0@$ .tfJSm0ܡ}</ \ތwf[eⶕrro(y  a׆ioT<.Vmc=H|ˮ˔Lǜc SHewT\|ejQ9OYzZӏPIo8e8^yV 6w+VQ])/W_*!.P|Bŷre7D#92.4^4Z\P Ǐyƛ;eZfNJ .nYeHսQEEnV; -^`hxlF5ZfҜIlS "     P=]|k-pe7RܞVA-Jd~~aEhmKlxs.dzVboeŢBHupAܾwjCV&){۸l^_O&gd$jLB@@ ͪϤg`o[44,5)k#⺬.}5Yu="Sf5)k#⺬.پO:Z$Г,{))Qsۍ(0]Ib缔rV{qI+!s2Q 7\$4}z5gRG@@@@@Fjtw -BMHf*`^ֽf:)~tzDv(+*lE3Ϸq7+ǽ Mo)r4|[z|^jj %Z߷/H[qBgJԚ*X֜IR@@@@@F`{MO_J)~7?Ž_Zzi%In'EǛNvr=nC@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@F50.䒡555puuu#G|Ηs,@]CJU|F @fi M    М@`~ TSW%s_Yΰ\J @ '7/'*B@@@]ZoP-=\9~ E.Eae P~>M    E\X@@@@@hF̮BKzEUyqR)'̬4U!L?   
Z.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      P]*rR      Pj.g      PyV   P~PW3&@@h30+ K]v  XKA   >)R   @ å/  P}K_`N @@@@@J'3]Jg͞@@@ \{5%!;C@@?)%0      B.-c5@@@@@@ . F @@@@@S.y8      I.TlZ]]ݬ9/"L0!,^8|_뭷^k6hݹs熧~:t)ᄀgϞg9Y@@@@@(4'TUݚ={t\jk wyg?~js '|rСe\pIs\]N@@@@@T5huvL(1"%^9SnYB ԅV5p*@L*18C?\{NRFׇCA@<-*Py.Q|M*x_)@]ᾱq@@@@@hS.m_;?spf?6?m3     @ PtiSvmtg'… 16ribC'i|4F@@@@@h.mޮvJ}oh?S%>] xcnpl #     Pp.' Egt/$32m{f2m %pLm\9Ey\!     P.%an;qbeu-TtY`lb'#4=WfWF+)]     EH^(A>mrBDͽ[˥TG?n=zCt%*&{BCU}3@@@@h.m^}@e|3qՎG1ãc4B%.0     @QE8m.Xdj.Dm~fkjKrXEhJ @T|c   @J _M R JԆ <Ϸsg9AYZ76eb(. @@@J P[}+K…Sc)ۏiRq5z9nϢqϜbSb,/u~T9rn     д=]k݋%׶UyV4וXw.%ׂ7C4@@@@@)@ѥ{۹rtokfrf siX @@@@@(TU-T,fT)KB7oDeq\<#     E S@p(|%Ͳ(     (`@gה W/ث\K!X@@@@@Pti^UEZf_evzOC˕f6mff#     *.ؕj8/4ᅯseM,WY+#ϚY=w @@@@@,@%KOm@gӱLm☚:&Vc      PtͩږV3T,?S:]?g9leq&#     EedL*~JE: 剃LSb6U%m;ɟwhowok'ƆҰS(-}L+z;dM$6oNz Ӵ[m$疜uG@@@@7 6jtƎ3 \&ʼnnq(YoeNӸML==hs Jn[((_kMdk?S<泾߬`    _4kğw{CH;kOx(/^Ťd^(ċ"[iZ?eO_QsmBK?)*5S kw?+. |Zl@@@@(TU IEzY-=KG7oI+SUַ]ոowK5]7SMJGqQNRQ_(Q.>,P|KJ yJSǐu7T)%TeVo#    XU+S?rgN`rh 2]K_d\lpsƽ\`qs{Owފ{rPf+37E ]ʍ_ܥg;'~n^2T\htRjs4@@@@Z'}[en7?jqߪyk51Y[) ﱽC>V( Ņģv{xxE9PqO?+n9{иۥe 6lmf0L-1dZ6iݵ{ݭ^=^VvRQ2&@@@@(Eқ%\2fv:PWW7K>rȇy SmY *2Kq-\qQ÷sϗ'Z).Һj!A=B63=p^*ܧ5lg&'y|?U9FSQ)4@@@@NK^J)0]87(&)feEptBq7jP-\tqa$uqoK\=n܃$ӦͽznTK\rBC@@@@l(ᥨ.qRO)?bW۫o߷zKq=8URܞV*);+Vdo2Yi::5=am@@@@( P'fKn-TtS|ִZٽcvR㬭<9/ A_#i6 @@@@K]KzMn2bs-X0+,-o1v⟽Bo݊L95q;^顸oŅ YGP)Ƴ2vgm;6ݽ\TS*. R|K6|[G@@@@H]f@ Q1ŷ ˥dHqh:.]X9\RyBT\&t/{~D-\rro%6Z)v\[4G@@@@]˦@ILF2ohث{ķT߈צ3@.DB4]N[[Hc^9ʡsnTS2[D-F    @( #e#0le- @@@@PGƁ!     HK;X*     E6     #.bq      P]pd      Ў(š"     @ Pt)kÑ!     @;Ҏ.     +@ѥ| G     HK;X*     @mGhG+Qs PtkBx ΕSD@@@旀@ R!@\'   @K T=\2sK~>A#  G.q      Pv<ӥ.  '? 6zΔ!   
@Rjq     )@ѥ"/+'     Rjq     )@ѥ"/+'     Rjq     )@ѥ"/+'     -   @i?쉽         PzP1BmjY\jN@@ 0&aQA@@ qz[dI6mZ[{.P 燩SE~ ,  @>5u^>,  ^ {~K-[:} 3wߝz3cǎᤓN na_ /RjN;9| 2*ؤb͞=;zaҥk׮裏mY(Y`  @ \w  7(@@Pzß}cǎ]a_/QM2e+'hO6u3އü;Ň[ipqͽ? &|iE7kq.     ArR!) $_MW즊avXWzVd.zJ:e(y2s=[0UB'[>=]2sX6].i<ﶞ.DI5D@@(=]KSU-)ۋݡl}E3G{x8^Qn)s/"ooVcwGbͿoGx^(OPEyC4gro?ȧ{+/(.@@@LC}onMJ{hCtyxּ@{3[MM%c5V$aSt)-Ek @Fר'ZG xXzkdZl 0@@NvO߬w*B#uh W30e{]ǿ}J78 %v@L*"@@@@ P~z.~ t-wJ0sed>"@Tla    (oR6p;ס.PifފﱽB+op!@*>VF@@@mA%\2fҿ lÍR9bfbY)7WNu{~몭2&K|I~&5F@@@x J)Å#Olml2٧c&̲0@@@@(@!.KVoT|J9{S})RO)?bWۋ#    @*%cO @[4(@@@@@BVt;kc,~0_5@@@@@b9<YH9P6~>U|@@@LKѳc0:ج PO4@@@(@m .@ a     dKVf       Pt݊%@@@@@@]0@@@@@]KV,     d蒕      @]rbI@@@@@@ E4@@@@@@r蒻K"     Y(da      EܭXhKZS%M[141JxK_Պ^-@_fqZxZ>+dX뿕az4)iMZ(lg:    @;H E^yaoeđeӟLLF? [exK_:jE_ jBlkMy>l-imhz    ( #@ <>xQd+M|ċ.Mi4eSFsZl[+    @ Pt/N.Pċ.j|rK5]Qf*)W)(.o)-mq[oc2T\hu^h    *@ѥZ<p/MeͅϏ.򀲾I WQzOc?(E\@TNU↋$n+PQUF(!,hRNmZx+o7޼(UVvTl0rsTud,    @ Vs @ ҾP\TGoJ5E]mEzF8z4}&lQ=o25ck.ʍ)Q+SNWE!@@@@ _ @ho).,Rc+iedܢěoAO %7k_eX Z2i~z+~{#1Edvr[׌Ex%fŖm?gOerr4@@@@4@ <]7^cJ\yA9Gqq5IqAgmkJ! $.1a_kmaz{hkz=zf*TecvQRu2^zhmtc_4e    @#z4"a P܍]3ZvW|*/OsxDq/_(Q\(pϙu).z<҆~x?*[).i(v짚K,ЕQyT^qe$p₍J8L;;6ݽ\TS`vP)Ls?    
@U PtI#@Dؿ.*^ѫoMEZ+++_*O(.^,WZۢ}9(޿TPI.q∷-6O:n7OqYrhyU EOKOro%6Zv|,=ɾ~G@@@J.Uy9ihϚ,uLvͰ{ė]o$sqtW3k4ՉZrC|4~r2PqOyJuht@Dv~ T=o7>фT[n'Ns-~~Ok-V+Qd@@@@ (TE@fl]p%׶D HS-mnSj<- @@@@&Vt1&w56in f 5Yc     P @@@@@@ VtUI8R죔פS}`VJUi_     @!.+P2,?eҘo                        Pv,V~IENDB`concread-0.4.6/static/cow_arc_2.png000064400000000000000000002251661046102023000152600ustar 00000000000000PNG  IHDRd PsRGBeXIfMM*V^(ifHHd k pHYs   iTXtXML:com.adobe.xmp 1 2 2 5 ώ@IDATxmUy?uT^Dk pAcX"Il^^ EA( )}9}73wfrΙg}>=kz(3ڻ5 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @`61H]'P @ @ @186s'l B[Mn돱grNqd.^k+[d'Vu˩"19Y-}&|ֽU @ @ @ l8 kUى7t2G&{&*t$1ض[$FbP~tOkko K{%s|vzRZ:Q_GP @ @ @8>?qZQ+6vf|u*!8^os׿gǭeoʶ35:ŶuZhJ]xO% @ @ 0b]w'UUazY'nZDVkU+>vM T⥒=&R+N>kWb3ABiԏI<0Q1 !JrQr48/qI◉G'V|'Q?'jӕzT3uwNS_ƕٌcDS=6mDͩ^wխ]JxoP @ @ @nL% ~N%e*qĻj$2f S$I$:NT9QdOD]ɾ=YX}RK @ @ @X Td]K=lPwJJc*NNT&HdL]AsTyU>TL^̫>O|*QWKeRd]QMQΝvZ•a+qDo&M$^U"wHbWv}r4-H/P}jF|5D镉ӫU*!R%JL \ ]Z}e2]%NO~Ǣ(UVT8^\LyԊ&zomu*2V[=tۻ @ @ @86.!3~%"9Dr⧉g%兩/k[%jxd{R ZyR گd}e>g*{e:y+)P//~>k~Bf},QZxbD;$NIԶ%*!UȩN*/J9Q}LzUjVvJ <}sdž:O/KTSwD78G^>Tj:,g)"@ @ @:}bRD}J%B5hL~^3rh.ެ.7J5];vmkצ:v S}-LU6su:a @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @1),+ViN?\jթ9/  @ @FV#;r_P喌)J>ռ  @ @ @`Y H,>ʘrw@ @ @_@BfM @ @ k4L#r2+O\ @ i> @ kf>&S~[%~r DyxD/$*īJJ( @EBf] @,Vb23_L%}I ʎc!`eMǞ]'zLZץD>i[jew%/ωySx$c;8^8&QRsv%_&薽fcHtvi|$q~D4!-J5R!@FA`oFaH @^''lD%*PG*6iTV#. 
$>/sبK%%n=Os{H5wNTaO'*S w0QɤtPרOU^鬤$|r>k_OoK3L\7Qc]▉Ay^*I;Q'qAbt4'pVGB!@2٦J @Z}%*PqyD%Wޕ 40|U]X/Q͉JܦK7OTى%jJ%f~xFuJh5DO&jtelD`7;MdL_'KTUN%Q8%QDsk*8$sJP=5Qeeu/ @` H,m @ NkWwJTnUdwXwL:Q JTUdLzL؟@yHbPNMe?'* +zֶ:fEبvEynjDέ'+YS+v>8%1(oN85^Z54HTۉR+ji?1Qnӭ&77J6?!@ @&ѓ|nxu2:eDJtJD\%J*T&.&z&k n#RYLvvqbw;:KR?-K_}`:* D%w^7qDͧV J%ijtL~ujLGj[ͷ'>o%$dƢ5wjfF @L <%Jܳ'r~O*M׬'J앨U2$.HٝJ^q1Qe5vظIb9<&.g&^a[$6K<-1(Uʙ&n|UԕJT$kTs{X62V( @,cNST!SzeW%u*O&*IybHʲjP$n6JćKTwG%6I ʍRGeryގy\6&89D5qĠTbWH`ЙKlMN̷>O|*Q  6\Fs5U @ \~9QDNJɉJ|2Sw^;U裏2S'U2?5ߚz @ @ @Exrw뿥wҾ<_ @ @ @C)MFuf:9ٽ|d @ @ @xoFM\ hO]{ @ @ @BNnB% 8zѽ xM&@ @ @,J%GIt1ǥgst H_f)"@ @ @ &PVUm9Yw%^87Q噉w\Y[O=> V{O/JfS&n]H|7 @ @(p>xŊ&vZS8lVZujC9#0\C$@P Ԋ\*Q_;9!W'?XdL]e2YϷ&JTD%c @ @`=$co[ @`h6ڑذ Ԫ[wC'غ~/zv.;zxM\WU & @1sѽwFNp X!3\cFkYN3r`XrOf & @ @,2 ;gM2LK0zLٯh^_% @ @`V~<)E`Pc#`El2e4$fF\%c]3q( @ @ @`1$dCy1`OW'6t9ߗO4ses8]  @ @ @$dngsMdlӽ,ON[7QLTv#@ @ @ HLocfJ`Cx]sr&IsY3él&@ @ @ HLoc˚y5h(%\wi9(O4ӕL @ @I@BfN\z[dN#oWi/Us;&~1$dM @ @+ !3|dOI03$>2vMߖS"@ @ @* !3c}V~3㾉'.wEU  @ @ @H,XB?%_θt @ @ @?q>S$ߐ$ΜlQ?xϨx  @ @ @`{I QFV%e"^w.uZNg;-E @ @ $d>&P/sص4bcqJ6J( @ @ @`A$duOz>xŊ&vZ ǑV:5q!Ņ 4ɘM @ @J I!z 2uYט0mqϼ6N\4tAP @ @X\ v5G`fN[MFhPVrE$h&?~j7&?ɧB @ @uYg9p-΅*+OG) @ @R@B-P ;ăȰSzK8/qn'*Q-+ҨD AE&?w.z4ZfĦJ%hzOMT?L( @ @,c e|MTCJ̶q`L~"WIu)rP%t*T7].z }6k{L¹dK>`suSU @ @X.2N'84X[2l7KT,dg {d'*H?%Kk  @ @e" !LniJnK%\;7'F:*ZI3U6x^$~P @ @`a<έm(  @`֋8d"@`8==K<%Q QIdW+?K!%vN TM0^  @FM[{z]@Bf> 0_n{=+Tf=wjMʦ*Z]A @ @/ !3 ndxNZ T?|fS @(pNϾ2ǭRlCkOt?_ڥLl{blv6=y#Zo&~ĉG_ځwuǻk;<4;vDf}WWZ#@`.xޟE⁉40eyHz7Qj=t @ @.wr{}k{YkվήI<?' 
Z{s[voh#d9[^dmO09ֶNRe]&v:yG^I|?[ɞ=ֶ֎Cc+P1P @`qV>W%DȟY&nB @E8.OOan!wohmÍ'𾧶vZ<m#/nmŊV o?>wb%cɃ[ Cv{魝s=i~>{'!-Z;SR &Ed{gSyz"> @ @: &+_*SU+cɘ+֎M撋Dɏl$cjۭ#nwاڳ)[GweO> iWyj%t8X6V,[m @{%xs|w~ȟgw%V% @y̠wL|nskdYvˏ^;h}yѳϿǦ~9qms-20C,-ol%fi#m @ 6ͷ]uZWZr_[lĶ{Tg6]c}f&#[;4yɭmLGNc& !3f7ty3C7MAW+{秉MR!@ @ X[w;hp& w''_6اV;6?if7'=[V~B{Aӯ @Y6D 1|xJAlMz4  @ @`m߹[ܿ#n_OXm?^ڶ7l?q#lkm~.$~> R~ﰉz-wʿ&̷>;50; 0L"LV'ޤLۦI @+[g=+>JnrHReM&>[v6J1y6&{}[QOFw~;8?Xk~^uS 1EX <8kU1ӮDtJ* @ p6o) V {z$|3OɲIZşcY%jeexҬY[y]|~ke3W\Z=mvm 0FT 0J̺zi|+wNu @A`ONWOtϦpdLZn]'>gs} @`$dFLo2(\Md-B @ @ H 1,R({w&8 @ @,_- @`&i{vu @ @Y@Bf],>?ش?V_ @ @%Yt$@< |7gyvk/ @ @ 2KGr{$~;L⚽~M @ @Q@Bf] ,pJ_I]i2q_ @ @Ep2(ă{{J3}Hʞ'|˿ @ @,2%:X/|{we]w}7|35iuݾPfö6۴O:hr|qWFTbPQs!>Vsi'tR;C۱{b-seneOi @ @ +Nn小%;<޾z S4NRWoeՉ5ӥX'QKl9rO @Xr?Yk迣7!C.`pݠe8$ɘ̃/r<$>"_$|Kd>U @ @H@Bf`&n;i#1җF͑xzfDV薿MGw; r쾉;:zn_E @ @BH,YćzdڷOԪe|ޚ?Q薛QI}C//_B<8)G 9cs6Kr3xz⯽ഏN Ov+qJݕ5Z變"HlP @ @ GwzwǤ{&O{& p;ӎܮsq˾|Nr2&QfxhŠ')O{wN^& @ @( !pPKm~M‪;^B6WOL8 rs2ݤzL$f=2IS"r=r[*|n: @ @& !nngEKW]ro"OL8 Y+Q@mD`]3=~GwzgȲZU @ @"l6q̵YWqRKU>8 fYflvlYz?&EU\';zl_@e_WoC @ @ 0W epusd/׺od.qZ.Sje(}.=gZmֺZ Wi$@ @ @`8ezTo1E9OĵQV FۯZkd;H8k=c--&gYzd @&ă֧X-`jޞ7_1rµm[acim.k۶0oE9 -8-S~eo}^& @N`ժU /q4QLV X!b͎ΝʺV>NbV>R}MURYWz^>8wV{i #/qReN* @T SWX0M .h~x;3.ǵ7x8X;+}YgvۭmQ^H-`ߦs;d7L_CS\|)v[]SLշX@:1W[>U3Sg) @h%W&c|;u1ǴJT9餓QG54{\zhd  lLs3fGu0W2]'C~jK&t6e['3 kv~Mջ?A @ @` kv~K0ud{;KҾ1 @`nIczq~z'Q/R h0+3 Yt}gI >3 lӺ_K  @ 4_'? 
tدoI#1쏂{Bg.UWf%-2&> Z _7} @(pP}'"e/j>B2ˋ%8bxϛ+3 Y#&_?V@(&P+M nk;6 @,nVio_祾u:-K, kf'%1e$F* ƙC%|M5uL_3{t @XSMޣX& ~7DA9''n.'cqU(' 3%ޝg2I;١Cg:v @,s)U_sOޜ.M  0Qwpw1[Ac4?S!@ @`''?5ZSr)I#%b!G{>xŊ&vZ,ƹWZujC9 y=fsel19w=SYڠ- :| @ @`^뼴s䶉_tFOOC>M!+㒌)J*| k.̌bosc{2;qi @ @`1g.THdLM볉US%Q}7NgX 1/wi1ŸB;uϿYktqB{'i @ @!ظpZwP}N&DO;mU1@fC2vƱGU  @ ׻Ҿ7zL=[1u;  @`t6\̡żܼ\kK~1#(pjpɝU  @ oki7.7d"%W;qĪB#$`,C%@ >c3[wJ @~ĵzU/Ʉ_oRwJ^&$C$@ :cDo\/5  @ 0ɞ;a Uoܚ_΄?գ̶]@Bf @`^;ӾWO @%`@IDAT𮜨S^./w\؁&zrB#$ !3B7P  0d?xSKF!@ @| {T;iKo\.ϕ=)4  @`$dF@w˃ҸmC @)uGulcߖ9ۙk>Ub !9FNX-/6  @ X;8d:Ƕzifn3E_oM aA诒ydrѝ @ 0DX7䷽<2=7CޱקIC( !37Ő 0b>`` @ \_a uQGOڹ+n;mURІtE# +!aO @sxjvsg}Io5 7ǧgOL@Bfn @`D>qRDJ @l_;i׷\ڛ| R !1Ed1_6۬I @i^-t^;^<H@,vڪ 0d2CvC #,P-\u @B.z{}˽C80z} 0$2Cr# c plM @L'Pmocץ> ƣ~@O@@Bfn! @`>֙Ouwڪ @N`lxUo㿧}dOsM5;ޤקIK, !7  0fe>驽& @xc:l8/vڪS \ZYTG*/4| @pH }0 e2[`O @zq2?t;ԧ8&[Z2۵קIK( !.M1ԛǖ@4  @J`):@eJΖMS[JK, !7  0kߴ, @}U#.,pNvnK{^&,,\sߟ;s>wڪ @((>z}pvNoZ%SeXb %.O1$honښ @xK60;mչ VuKNQ3S @`$d<Xj{} @,_dMeiӜϳ{ԣקI, !.Ge$םnc;mU @ƙ;zq\79ΡSyw6 @bH,k @` WxlZ0s @]GԟOe);~tOQ@Bf]P^(Y0b@' @xNw)Kd4v@뽳3\DO, t)Oă9ml:Mocz 3ثsǧ~PJ @x /ߚ3U&^Y|x|d̹~GU~~QV)գP# H uZJ~K;  @-'R.>!QɁ}KtK%vϛ@w}!'*S6%@XЄ̪UN]/yc>qŴ[,5-ܯ5n_s#>^RIJ @ ʋ::ZN_UWoxu]ĜrebN6XIV%@XҚGn>2nпQ*74| @ 0OͰ뇬At4a @Ruot(xJb,iuU/JMw/OZK8 @aNF28 @ @`3~?m}NXO{dfo @F[[6&>_<1z @"l~?S퉭rNf/pZɽ_ž @ Ѓo45. @K`=.;ו<TOſ=gp @u3v9 @<(gSvD27ͩOƿϮ7g"@ @`ꇬ7[. 
@ &𒜹}LzTF 6}-3O%f\dfO @`<itxL, @|$/~0+rڗ%O,0pV @S7usE @ 1Y(n#qSbtN @ wN  @ lvU{~ ~q#K.Dtۿ[  @ -P߻B_  @X4JZ-_?\cFBS \3LtMT#@ @`vٯ% @i94^gM Q>K3$W%@ @`>Jtٿ|^ @X2K6"^@,vZϞl#@ @`tPztn @E^#i{e1CV{><52 @"f^* @ 0( ,0"eÌmiI!@ @` .O ٯ{^w e  @2 ]&7|#&@ @`*sj'} @2zdN#3bJVMSmG @ 2C&d>1zS0b @:wH}N[ut׊'Q`<˩ @\꯭p^(:} @F|+MX24w%jթ9/Ѱe(qO2_s!p5Z @`~KճeT2h$@ 0l1vG&S 79n̜  _  T.gI @`hꗰC3YC`\͸k7 fDoaXb %.Oe.pdo{ښ @ @B`ñI @ ](qi_ @ r^Nӽr/ǯwu'`S3"@( fߡV%@ @ 02cqM#-pdo{ښ @ @F^@Bfo  @`-sgd @ @z2=MXt#{W{qC$@ @m ѾFOqU&D6O* @ @[h -h @ @ HL'S@Bf1] @ @`$d  @` #{}H{^& @ @[g+23:32vڪ @ @FZ@Bfo @`՛͞& @ @[g;#{3h @ @;#'@ ՛=ߩ& @ @h Eh7&@8 2:3SuJ @ @`d$dF8N`Ufެkk @ @I mMnofw5  @ @H6&@ ݛzmM @ @#) !3͠  0?̮Ʃoi @ @I mM 3;3;mU @ @#) !3͠  0?cz  @ @' !3z̈  0#3w @ @P@BftS&@ H 2< @ @ H ?/\bԷU  @ @2&@ %3eo#$@=N?賯]1^vu?#O`|zk}Rk/YhvgL y@t w-wR'$@:ka֚V/ß_ﮔ%ׯJ࣭>Vޭ~@O 0DyrgbbVyHko־qy 0VȌ 4|@L~S @ڋoړ6m;yPsDk'wfk|\k]kw3vh{gk'q~1۷Ƀ\R}Z{ skM\j_?b~ى}.h"1Ohm,R?Z{[W&~{tƗs~xI/QcSK`ڊv^ݧF%Ȳ%.OS T22GUS @( < <+[kekzYyrkek;ܨzy?hOlOw$O S~ᑭ=m2Is7m#[}VT笓%ȾwkORVo}[_W's-'/ek5Qh -}_5ϻ?~mZXB %wiX@OBfB6 @yLXͮڽ䏽z Ӡ<GeR9mIԶ?O|n;G#F]q>Dߜ=ܠ~{t 0 :դ 0?"o$U @.\m i=vȼs&dvrklzj+]YYju{[(K#v<-C.0_sW\;d̛~Y]&3_G`y H,nz>CKfPJߠ @#JoU?#î Je'|;Ing'G~Bk\9$J:Y^y}Osk;Z{Nwvc"ecr#Mc(PO-o ,^R!@ 0|\kQe?hV|2fmDW5yjkgڧ;59z߃ڑkh1yL$u~3>5C'^#[>Nki @`$de  @`VJFBfVTv"@Ivi&jg'FyL%cK}-۾(hUmdXÏeepݸwkxzk~ľn6nΝkC[saZvO['@87ӛښ @ @Z@Bfo @@/  @ @eXf{I @ @`$dL ԓ9m h @ @ _f 12X-pAgnS~J @ @`$dt~۩Wu^[ @ @ H 10 2k @ @Z 5F={d$dz@ @ @+ !3 @`M ,[G @ @`$d!OX! 
@ @0 H 16 xdYWC @ @`$dFv,2;M* @ @phGf` @`M}b'> @,^_s 0VȌ3z,7{dv[nK @ 0VȌ}3j,WJܻ3];uU @ *jժSWXt~оoM6٤=l;4ӝbQ_}sW^a{Xc= y7 y:L_sK5^wT;wpVSiPAv *k[]u-{[W]޻]ׂ6IJkò"vTQ@Hμg :sK;3INNN&\frn\f6x]9ì@pLA@DÀLE@@ lTa.\ ΝO^3_4f5kVMx3[1cSVgιxF{gܜ9s̙3?^|Etz0q>u @@8J=ųLXW@@(}Ih{ͿU7p66C g(sU/`[+c=   68Oa,(V9N]H  [` ʷgÚC+>Q9E(oz  @]˴%8ua @@\Z+~uUlsj&R DN]yH=r@@@;5"Kp廣mZ  rվG~/}L>)E_ j\WۖW4U@@UM_3+:  & >jw\fiKYF Dk^ߡݖ_TtR@b(D}Z_%H+Q8Us2-'+@@!'mk jٽc`YYQ>C5_m !L, ]^qb%ES+/ŎQ6Z/Yu3I܌3Zb?\˛( :2+z#~%!nJg`љ*b   6-]tRD- DFvM4؏ߔwt-Fr 4˰c+þq ?Hqb=Ίb'"bnmVxI᥯5L9-JoV sOa0vLQ詘(T*}E1Lab 1)lpɎ H%p2OSL2g$@@ %Fݯk!2NR []uj_&(BM$@A_hqiL6"%Beۇ,{^au\/j}#.Bʰ>3r4kRgXG ;7b@@F$?U kMY6 ,`%sξ~"[~YYAB I]a(ֽi*dwH. {}y- x+b_Eq=WawfxJ]5bO݆lmBN׼7ūm*u,vŸ GtV(dۛ}{)vՎ=6~LNV.۱וm~nlY/peQ϶a5  w {(n 䳈!`g*Sty-ŭ,"9 C ROb26p`P]*~Pt60PT -[VX dK ~KC2֍SE={ܗ-Gai/%6pǚZRX dZ߭ pKD1Iqbnl+RʴZ"/vbV vLsqjrَIjCIP@lU>O*`=  Y }孩C -}9'*}ku Fh0~R؅lT>xvd\S]A /=)Ʌ 4A k̳_PX݅{>ь XZ]}xZ%NN.>P؀}sKьPSs}xi?Xt2G$meK;RkD7 )cf;7vn~-T@\W!,>?Sr2  k%l'Z)⒂}KGxzٓfF*eBts Id!v ʰdLWolwSX Da9SA-B ؇? oɌ5<\ ˻Pf+((lh ̼vGd1dwrIa4ڱ lĎ7 )c$o@ƾIt/q33, RacX'd"  ]uTdRq @Ϲ"G[~MaI @VE^>`Q]"y.h~,kw؅xo}drZz 1 sb@p@O/˷EawX]r[,LCn@f;2vV8Md29r{2Wh\bY @*LY4|0y   ߦ/mgΊ8`_G.J8 t`_ =ۆp(@ PxD(2o@C۝/1A6haS`dwKKv?_(`{ x ebb,TPxNq:m gt2+%Wdk ` .5MncC+)rIَu.uˬ3G=)B@@ @;(l`Ɵ V0|B6;"Pخ ޣCat !la\Dƿ4 7qO(n-- O # 0UXzYay7)P\D,"[EyV?#%3l@ɶ=#MWam\amRaDlo]}^mTmߪjyf.uӊVgT0{JawXʥepy/rqZE2뚊?6ղ5;5oOd G`>w  lB77]r e>{t.\;׼iț5U +v-nF @ vGd6abp^:_36amfZ 7&j-9˿($VdQc)$ƽ6 K[+&*l[6 5+.TPXLj2.Zi(si=欟bkXo며:7 v?R_y@ GUkyS"    ܤ^Ĵ]A%As=犠"坙}sJ     XQվ^/FiR,M1(\n+ӊ-?h !    %J{^?Q-=QH{e(RW寙{UD ?pWUU]ꭌiFuu$Ei   ԉ;:Tg]'5/\jN Ch T`D@@@ ?f*~~Ep0P0#BYbg_Pg" LR9XIw}+~@@@ 6Tz?bIku@j&XD 0 C@@@@vN {_1sXD"0G9Xq+OGjMZL@ v/ }׽"k;_TNC$#*K!) 
^`VCy?Zz# [/j޿-͜YD*p:Oa3(3X;dB|phDJH.@ }>@@s5xb"8c.Z`H PQ[{'^wԲ} @> 2dBv@hDV;c"{h8)"85EnTޞ_R# o) t֮^_>E, Ɂ m] n.id5d[oܸFՄgPƵ*t s  v*b@ߴ|@> @% UWQfM^:T3*P B$2!:4+`Lـ>zWQ՘:*}pD X`\ySLTȫs DJj`̷ʳo3#,Ϟ3=97?ҫԷbf@@\6p3g8 lPmR(F bTy)U{D='5>ЀR0 ,@\kk:Z$7Ww7ܴyݒ%gwR ϧ3a:/aOફ?K./֛7:uh_S˯uGp{m̰:o܌YF@ \]݊?)Q8+(:d`}m1#!P,ιbV|`Eg!GU\ S 4K5 ԣMtg從kU³ܺ}cr0ا_t}L}f?}{h9kcO=L%m@*j0=2alRh?+NK-@8%v{xAelpm `@plC'ܑ;}n zw>qN^sq=~v]keed?Nu-6q;nzw!,+|kwk,}T7R5s%5ǥV-1=Yf@k݅ĀLݳk=wG5{{gkV^v??a56xעeܣO>窪xI}%r@(k4{|{<]{K0BQGy2B._ι[(;-?(lp "* @ wdr$ Pt~~fwh^Uw#I  3& bHx0ͦMc'-WP Ҽ !@ Fq@]۴&wgT͚] u:PLfMݦ+uBBXfr<=ҍ[K\pIn6{w@(@>}һRԾ`8WʮݿG9s.Gs.%έsoq>]6H1F h߲eM].XXNsYG_fg!@C=Ϙvi7vtܧc2˲@ 0SRj֭6=37Q$@@@@  +.xgM+vpcN^=9q!!_~Z䟝E0s3 ӤIn4 ,[^o5v=«5L:U  @E ]wX:w\4d+uqΥd!sEĥxΥQ&21:t/NY։oټ7󚫸j_fom]ߝqfvmZ4kԭ7qqMWwG\3g:ti1uV-͛O\Kl?SOB@@@@@d %);@we2y^|E;(ߎR :rk֥8Whx_==    ,busrXzvQ͡BXCLQ@@@@@0 SƊ$}yu %i}j(]6RS?v-[y#Uf-S)̢ڹgK!+݊'6of]ŗ 6&vjBi1  7nܤsoymVAy:yY. UDZ   @C&Ǹ=\]uʳ±*o2].ŗй#岁L.^gUY(%%|"!   @y>ZUb+;I1ݴ|xRnw؀S U U$!    C rY:;W=5lx2]^R   @yOfg28*󸗳לsԯ}sUqY  ysr>QQ?~{)*R.V@@@RY z{8]mǎt2){YJD_'#~ok:wV=~sDߔd(9];Cks-@99ވ_}eF 0 hŠ hiHjUl fl&S֟L۲@@*]Z~T۫- scu{;4ιrnδ·#Y[.rs=u{:<̹4U&R|u΍~ܹ[vn>윳wjb(9Iu_uihV XAa;L eJ9lkN6Kcd  e%M}t!h9{r4y Ĝ8b5Y%97ObFwoYױ΍zȹtn<~"@.ܐ[suεh;}kz 4=Q@.gTx皵U߼\L;d FlB-+DɊ xF3S+f)2)YD@@ psl_Z;wκ`U}.0I۟u碍o[;^;1"xԮ~vs\ƻ*A^% [?'sT%&9}?3XTKRs"ю(9|&}Aw~?y &L(c_Q JpC{_hيŁvvղ @@h:ε_ٹ?O3wc,مun;cxn?REN+&v]׹N'?ܑ'fH+Ps{۵w]c˝g)8l9{K;չVmk󙋿@)9Oq_/Lm:yL(,+(g+"лdxL-K֯a˖A@@"0ms]N/w'j8sWlzG9g[R?K9w }.ݖ+fX=o'J:皵pnݷ5B,ippMc):LI ܤy4`ٹӒUg~RsK={E޽o@ VH=]XnGYb@&O@@ho/$=X>|&X=A&ODл8VW۾ ν6(Q.G眽[ ʧݔ@9ιk9%ga.~Hs(95;X{(@iZn\}փl}-FF6I$@@@SMlE7s+v-]Ğ=Uןx3 (97uswף\ =9{rJ}Ιl՜:=ts T{%Tϝ{J8S%g駯k=F4Q,LT]pC~@>"5;#:  h|=9O;QĹnŋ6cw|:Rm 򜛢KW9zA3(9gpxlsP=d̀LJJu٠˒E գG-?9avc\h2 窷(8AEZAB@@&͞d߲b~>rsϹw =Eynqm'!Rs\5lRs;87^yi1#U@ιuf 5u]snMS0 "lۧORSNՓD+l0+'!  @=Qxܢέ}[뛶4(Ӳu1w&=م;5`lR:fcsnDѷs? 
ꜳǔÜE&; Ozwk/+(PsމwYeO|i_`$UY*[oﻚЕ  PD&u+?QoK"uviZ̻ "Z @% n1z/k7jˬ_CnVwryT,Яj|HDnynS U@99{oC "qt'[i; pnW:]s.PϾ ˍ`@Ԁdh7 y{;ቊ;+j@ݥj,`2U6Og( ~vwb=~j M"hk3 @&Ьs]א{ܭV. p5T*PsW^d8 ꜋} 2<, ;ή/aV>{Q`^(awo])p}(wJr)uݢ;,- UܮHKawR@@@@(} P>oE v=vdž /.)V2gZK%( L؝$Mz{_auHff ܡ8A1FgzO*,QxDAB@@@,AK JjvMIqr2Ӧ)l.[Oz`/]scCvW T)WX~ x~QܨAaPZq"[TĭGa#l@6SܥPyL*̜je<@J_g; l ӓZi쎦(R2fhwPJnp}6󁂄    #dbt0 Z9–IQxj~F2+4I3)l@?QZ(NmM~hMa1+rkju٠{5 *K CCbO맥|/˴wT nhXaޖȮ~NStl2i9 ry3c̀@@@@SLE7@ |k أl]_[a*j,ـ ؠ% 4gqjJu/+T۷ذ\526͓k2SKcm&lPv'ـ?k]6cibbRgzӕBMIdǘ    3dbv@DF`ZRb.۝4{B ]ķG-8Ia#ޝS4o)xIDvA~cl4+ Ri45`2V\fy?xMb&oO57PcDxO    @ @|6 {SVWɘEsb-ssl0>ǷTxtLb L*=+NPK+f(6 Th_80-l`ū @FjE J3x }ugWP@@@@b&`H wأVRܢ8Ca60Wawl'fw]RpBaV1Z))OKB)m 6DaX/eW.n @Y[U^qޯsSE.d!;w+Vqd0[f0?U5w*Vؾ(.TQ U@@@@b&L(AP ,Ъujj>R[PW*좽l?)*l`\| [Qq"UzO6 c[v1S%{tY>xeSM}#&*>*%ذBkwXMH1/}     2!;Ⱥ{?<߶g^z饂tȑnҥ5aJ6`)V^ۗ\i1~ǚkoy /ZhQ͊aDId/Ӿ-    DX`{Eu FO Xul*ٚҲgvcN`CUe~.\Ԅ     p:i/v ^_c[UNBE/Ql5j;[JjA@@@@8 4W( TyRvbc4J P6pb8    $UPx4Z+߇͓ lU5R]}OB !;b^hyzf@@@@@einQ*P+WĩR=lyE{ jr8    U* vU}]"o|@~CF;@e[|ETX 2E<7Q .0^oc@     Pvuf-StP*R'<۶SX̶౲2Z># 6|E@@@@ 4WLi|K]8OQ~bMJ?Mۢ*#dT-2_$(p:߅='@@@@@ͪ3{-^DC, mٶ&3ߊTl;9P+b4F 6h:Y߄W !    @ VUl`.NUSP6,| #lTr I: v9xCrXD ;e '!    @ Qr{r)-o%\*RԁI?2yжt ; j~Tš@@@@je.>wR[+|GR#U-ʒ錂4뢃"[.ź k !u{oE?G;K@@@@@Vf?U/[n^9'xD6ZRݵS7pGa6k>;bRtyW5ЁjLsګvv}UUUubbuWWW;i:IѻC8y>yOe֜繷4<%KeWtKR lk_    R`1INW,G<3#i̴[b2 _+Evb1`Lŝ t@@@@d"{gLi~S2(d{UbYO)QHluŭ?[V7=8@X"    (LxIK@",7<&"    (K{Eh!5~V$1,}qn6Nyef@@@@wȄ@@Rw\i<@@@@2><4hK/k@    ^"T@zP瓵|:"    `@&"'`1#m|=_ycg_    DF*T勊ޞс<@@@@29T4n>+|w E@@@@"%LEb+pz6\Jy}bk:    P1*t@ ը{Gh`L @@@@'L-F":rb@j"ō|@@@@2=t4@;z_U^Lײ-B E@@@@"-LG"%`ZrߦXG    hx@(P#n)1S0#    @C&~ǔ!ahQ)kLl@@@@b!L,#@B%`/G+TP3(e6OB@@@@  9@dC6J*W潊kR @@@@8 0 ǣJ@,쫰AtiVܩ^cB#    Wdzd Pxr\CqbJC@@@@  7@qMV]6byE.M\6     q`@&G! (6Moǒ嚾PAFC@@@@L@ʪoM L{7$MFO)l} @@@@R0 ,@ {?%gM%5G1;9TH${gKdM1oyޠ7BTK0P@@@@ 0 CL@Hw^ҺɨY ,MnS*a^R&yCa06󕂄    y 0 '@7`X捭@R=c^؝>$@@@@h2dsXH($G$d | !    
@),"@We++(VRwػe]/x~O^:g+}4馿j=j|h4͒@@@@@^,e j l=*}rL@@@@@ B%;_hT?a@YG@@@@@ l      @: S]]m-M*EJRR+n@l    a(L\(~(z>0O4f                                                                                                                                                                                                                                                                                                                                                                 U{g   @DWUU]"4zweyT1 qᘄX8.'Z0ͅU{VX8.cҤ2   N HE:RsLjr\8&;&"K⸄ $q<R[ᳫ1I/(%ŀLy   VNV",GaGXccR+10x uUʻG#X0 #G@@@@@@ b;:  IWpV;Cp6תJ;& l;Sce#Ct)}螺{}YC6{U\iN ߊ)+5 wȄx@@@@@21RNaTuع۔g.Dqa&jEGEkŦTfιK4.9_ 2%f7   P|>nI=.zxD71Ssozu;7|3V_;]Yno[Y w3,+gt#' uox;Enђ'&X[MN:7^]pL*_@Ju8~6@IDATyfnWqxùn(2M}]mwNwooisuKWMZյ^nXOMwm~n¥r?kյkf˚un:ndNS*lt(Ň_E+NV`y /9 ;g)VQܤ!]1+*),uVlT3W   @5=8n}UUU;mwpR?p"R+X{-+XI4hlAnDNl "*U?;޲{{}_ovfvv;Ϲ7kԼ;vj:4}[ٯa5ۡ.6q֤Ih/nL^}}4;xӣ/{_6?zd;`{v֠ m Wfء}cg{@JI`̕69 lQB]ӞuΎYVYv{X䣶`߾Ch3Vڔ[N;^ʛmxOε/]hNZf`;߼IuOlO茚/-/fni_d[hxPmneR\|D7/m#3KJp ߫< Q-!o)~F/׋9[+cQJgdU!  @ >e[[FgƜ: vf1M\Wgvh-Z]hyTsx]1Ɵp?k+֢q 맳^q;gIOYsM9bLhF3 _ Y(]ֽldN#5Glt޵za6Lol+m_o_(v:e)/^i_*Ƅf?qB>/xTN n 1̮Jٛ>A1^WNVR|7cr(xz@@(no}[ۥe`pcG]R,5*%_zY{ָQjɗ#tCi ObЎ{@ ^_%:>)f#7iݏW]7Ըz Ee_ӪwYi;^ gcmiJao×vجYYͷf=ߕ" p**+Uޜ\"  E+Aeж_^n_/M/* }V.aÏlڟ O/ڴt1Ořφ#S%<>)Ϧ#@epe\qHu ,S U{5 1߃f*k{xk۷ow ]"翷tZD vجaI wYYͷfMzjeřtYV  -EmBZ׮On<`9tKt{ljx|Kfꂏm}vmMzhS~,'eVט,hMz>)Ͷ"P->d Z>{`޺d:l׹2졚3azx==>jUol`hn9ZDC2Q 3_[0o?V ㅘʡheo+    @3۹6ۭQma+e}8oBJkܶ{)w4lg{Jˡ=b_ۏWZ}yجf.c+{x#lRx)>Ͻw'b߭\`?v"']ff6|ԭIe's~c׌M^ʶvm1/lѴ,96q'QTTJuYk[o\-[ڌۂKSckg[߽sY<oMӗ֢{S{4pYZoNZjKfeڄW{x|[tM{لf(~M}b[)+Tڎy/^rr]9Q{,T~1~UMI e-vQ k&K-Ր@@@8);/fgnQ4-u#,Mt~$!rF~|9fo/lO5hjMʛ?i-Ǟh{gsl97ygh)Tř=mgq ]r Ϸ߽6pky֣utp٪eγ>W\E+!mcO|PQ/a6{|a*~TN~5\-NRF歕c)4w@s\}]83[`속C2M^1~^Ïi{=: ng|rʛ4 ӸUzn[}awhe466 cg7M>NY-|F[T* ME_]~e+JSo4Fi VUswP Yl!ElV  'Buߟ]_4o޸ݠ޺eǮ6g|{ݲl֥eWke]w85u-:[mҸ>u8>ɿX3  䐬w5?ޏloӿW!fMib\{&f V̳޽>z]AWy̰ }żjY|m_h%T5\FKiī״'h>HϒtUj3E*JauU"  /_g8vִvmP &F.G[ SVVZ_'Icxۄ~Zܟzly/bWq+vCuV>o=Gkv1/m5^f;h]nib? 
||h[19hl0wFc_];y˳j>1uo⪮KJ[ˇ}O}eн }`*7 wW3{M'7F{v6xfۿe~ZttWvO7M޵w8X=Ϻ#l6=mT+??Ѿ\<;ptt=3(v貋t^Qz,&x^1lk?Η*1red WrczN6=2toSyL*.ݯb=su%]}N:#`==d~tEbQc>A}xlc ~'U)zן+#SvRT?7;sTFyrs{ſOF(.K/yĿ?AyGC顜xĿcg*~qo)n2\;ض&[/ͅ=qr[~n~TLZ]em4<`,Ƕ곿g*+DIWU{K$(+2NY4o+?Gx/kvrRBL l6@@r#p_ځ}1UKB +fancMZ OKY7iWaSֺ}zsk{gǘ?k?qㄟ"Zb@s3*H_ TByWgJzd-x2oD줼lo@~Z;V|k^[~x&X\L,Ǿ'vއwy.UKք.Sn`sˇAS|Rמ*%]Ag s^,n {qfV=~ىz8\yGz/amG-9ə @@@~-[R޽jzT &n"' rɧ7-kv9TYzx7uϖtM}Ӷ>7T~s8K "CZ^=S*>--ٯٹcOKfM{e~9F7- QzZ;~ؿys:?[ۗ,WZ~nvnP1Ƨǰg6—m ES<22X+~i pr/-3^Z?1hR9JͿsU*]9CB|ZHqr:+^E׆iWֿZbJk1!͗N9O×( TJQ)Ϧ#  iO3[;o$H;Jp_B((O*ѺWg-cg'zY3?"?lEy_FICʾJ7%Ř\9K%NGFljß?>n6HMӶ/Mi/t*tՂ'N-:nzcHeu5Ϳy.eKe+fJP.RZu\Mo'~/ /\v)Ա;=2Z~ͬm桬ʞc?q3.|}g9[Ee&3XSwL%濃)^PU2Sтk4?TK߆xi5(ނLx,h#ƶy VP(! e="mpſ SM𼜗JȯJac*^˥Rrr^ᶳj@@@ ~V̍^m;wN匯4 ZfqM_\"i|dktZNl l]Z-"LZ.$ bKG9F§?"NVr(U"  @kWّa iO#UY"z361F0sXoTӇ#Sﳋv42ڧF{Ե/xqwd/xʚ7] xy㯿vދ]bpMO\ホZ]CG Rl6hϯyUy{=2;s^<3-rv61:l}tl~ZSSl_H L>ޣxm˯#g)Jpϗ4|s7kJlRV2A[)JnQ v   @ |í_<3Tx Vݴ]ִ`۰y'{ gBl׽}Έ>71tzϞC rӛg=6`♡bO']6Gk:oֽBK2x@6ɪ5OoO1t9чdvRº:@u+ewIvY*ЪwUo3u_gJzPbM×mIK5<42o×%WXms4!by='po~j d5F@@@fAآsh[`VM&~t~_8l퓵U:/Ƙ+_ލӱ=,@tb[4. 60I 1zAhOx {t٭5iajz U 6 M΋66:2}Q5*9}@9YZO "Hk9n t~JlnF@@@muKo4=Ӯ_U~+QOkΝ`[t ߑieu5&}b̕^obL B@vm+U6֦_3kۿy(+毲O\bm6-dƘpa&X?**9屭Ky_[=aŏ{ 3p/chbO/g[}W0MaU4kwk?NolSIވ,=x1oJKŋ2N>HAis ꓠ4iqGL Zk`$΋&~`~3壨p2T.%kxro>m^F /wmQӼK>wv|L̕ZD9D쥣k!  
IQ/u-΋0V-0)kRv6~bGy/bw}x7?}s4kM<}RUUeϟd׮ _>}q֠(Ȭ@'Ek2/4jRf]Eg|p,ӕFCm: 8Q]~]R wqrWh W|>4쯍&F&nWխs?%( ԰_e79(޼ m\a_ҔMכڰ okc뻪К[=чA  @ k~co5[YY9Pf/et[Ԙme;y[z}te~ZS-:ח+(\ϳ?h  iClx_w e鬕{4nۢkSzZyZvkjkSs5U U \7+?3K@6̯Ryg$j#'Zmg_ ~?hC5?g[%h^\q{cyf⯝lWUy74o;=,V)~Q%m7-2YxJCg /PT*9<-;Կ\Ly?TjkGɍ[%21~lyP|Jn'ŶiWo|EL0  @I tkcbL4DeEmܶϺ3gc83̸T@r)вGci۷yJŘ3\zMl1&z1V?φҟz =5/82N6PGy\+m[K 3o)!g{8mOPyDIGG Y'~+oLdcהNVQtⅩ*[+>om-(>`5eT_ޏc7bc﫠y߮PSF*_7oɘ@@@@@ TEe %^Ԙ QxѦd4h^P`NW~TĸS(*~̿o+~yXg\x1gҐ6B/ %wkx-ohZmNo牊%6l_ߖnJLɘi+K9Cqo/X}dy*eKe}U@@@ Yf':40ٗ0_'f  @ $[Y7[iVk(*~+JVj׊ci28"ы iFx>jZtzmgm6SU-J e8^,'<y\W?xW"`@ bO{3D;=&c}Y  m̟uO IG엜'\qk4|+;E|+헺YJx\$S.RV^xa h~fwH4!D5ڼFh8Ͻ4^WVk)^ea4FϣG OLG?%yV &w(uE_4jQ-Qe(@@@@@@*, /,SO?rfŅyV/2D>'ӟ]⑊[LR~ɲy0!j8ޠP~ J2t(ujqƓ12&!  PUUUʺ4| ܢKi'5 aOox/%O'#9%7c|tχIzQg,ݕy<0kH e{<*/(~_ld#o/SZ*A2D ?wL0aMșʖ1x/wgUCN kI%cr@@\GWmhvc<>fΗ>ɿ}=bK|O#jʗ~vUI>|X'^83K<( J%hok`$}^?@G+(MO{R9P9UʕߛŋD-Xn'"9x]x#/RBio*] /v2Jy<[(/mr2L9VkoT%/M_)~f͉xŧTy Hmw|x$<#     @J]4BpG/Bs"Fm5VS$z-=~rW*+;)ɴDb_^̿" ggy|q7/BMSfkz ՓSJz$%zKhޠ`A9K Zm/$y$<#     @AP/[QOkOGs%8{+1~L}Zk S]N*]N㵙$ C@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ e9X'3.lXYYٝJ PɧA_@@@@@@((ne@@@@@@ (ޠ/      PdrQ      Odio@@@@@(J 2E[(@@@@@' 27      ܭl     |@@@@@RLQV6 @@@@@IL>      E)@A(w+     $@A&}A@@@@@ SB@@@@@| O{      @Q P)F!     @> PɧA_@@@@@@((ne@@@@@@ (ޠ/      PdrQ      Odio@@@@@(J*6 @@@VeOHWAʻ~1Yx5@M|hм m\a_ҔM= ?ymi,    PdJ+`.U!},3ez˖̮+-KEdj/wZV d{orr2DX_;AB߯x!} /+L J%]-Jm}Hu]V)+)It7DG@@@@ C2\[~I((~OBo?j+)~i#S 2{j= ŋ5^?ˋ/L}gU e o;*o)=~򈒮_~>g= SF**^PWsZ @@@@ v̭%܂|`=v^סeC5-,c{(^(YLR *^byPѠmY0+^d<\ ڝ@W3`x^7~˕+/LTzbm'%CySB3Y{)E^glܨvM!    _dr?.aeeew*r؍j2z>- L0\f+~&KeżX\14Tׯ/$*5Y$ ڬz mbٱ2hUq[;M]g|_GMc6PnV*/lV2oRYo    @(U"@ ٤/\LT꺤k4?0/xrrrEZyM!>ˏGb_ V|>_NeJq>zs?    
9 #xV%)𭶺,๝gD/g 7EC^ &`vz5Zt+ׇ \7x z:?ç2GY-    @N(䄝"Yյe^\B    @A 4*^i@@@@@@(΢      Pd sk@@@@@(  2*     o@@@@@ HL,     )@A0F@@@@@ S@;"     @a P)F@@@@@@(΢      Pmz@0N[~ @ /3{|@@@r.29tDbLH6""|!   @(xz(Ό)]Ɇ Pt|*]!  Bk@@@@@ J{ P(_@VGC@@@| |@@@@@RLQV6 @@@@@IL>      E)@A(w+     $@A&}A@@@@@ SB@@@@@|ȧ@@@\pUYkA@@ @@@@@@ pLSYcJevf|   @U.+n\   N?2mV\i|-\Kӧ@&-ZdSLe˖%D@@ ]eUt-   @rz?2yed*}B̙3 ///O{[oOsʄٜ9s{UVYee~ַo_ˆY1  P7~l5   o.9r亳&֬Yc}Qz5iҤuˊ^71M^XI.J8CvnȢ/x~a7nq"     PڀJU24.h g_UJІigЦ ?|8H@@@@Z&(~$:k5~|$')z?M&vS5_r]c|N ;~!PL*&#   zC5e{P~ /I=7O]vB+PP 26(Ϥbܫl   "pf^D]W /-Y3m7R}_]!uR L@;L3@1 3.ۆ    4ד*}|{ĮS! 3%v]>Fv_tbgRa@@@$~ZooܾU̪Y |C۹ZIq2(fĘq%ߨ4T*~Zn+?_AF 3)ܬ @@@@`oajƇ)l^x^G*ñEt+/f4'[i3 }7gTs09gRwG@@@ x1:[=~1i&uOO S)RnbU;%vTW+DmO=ql'~fǙ @rLJΉ@@@@ {)Kt=X+|څZp6_S-kn%zNQ4mZH@;3drdj@`}~&o@@@([>J;'%vy!pLDH@;LY-/ϤM   %S]D(F~9b2(jOiEm祙1~K[iGEv1{-Y[yȤvLlHII)q13>ôk_Jgrߗs4`IC@@7}eς}nVbh祙YJRj:-Idr*@ ?0jb 8D=i. 23+a@@@ jGHyH6~7>K ;䫙;gDzS/@A&3) @6 2  iKmWl9L)=Py|^Z!_|=eӈ5o+  @&ʴKǡG]Wh )Jt׌L|塟T>;+"ûћn+?Xg PySɾUQ  տ3 =<ʃmD}Z@! VwT}EO - ,z*]I,H ⅎ{njeS#QV =Ri)~)h݃Vhʟo4Q((7+?((o7Pnr2B GO0 @@T'dOMVן/f#D4vRqȔL hP.E2/@3/{J'6+#ϝGo~扟G"9=΋ Ŗd 2^Hey~̠G/߿D}G/|⯣ 4@@pɲU _Zb3>LjZPs蟪ͥsVVXޒ̊  b+N^9T}BʩJjz#qj)%tSTVF+~I=/:@@@ OSW)TmH}im}|5ۗ6o&S2! |ΐ,Dz 3^lRHe^3b3gl/|xK xg}_6CgLx>˔&L8   a,{@xf|4@>uU7|з?%HC@@ 7MgĜ)m1J]Kv 9ZL /Iӂue!  
3 2/S??;5/[8\ybDěg'޴l/W^TVc^ɇKգ@@ zO^BZ+Qm` 4TWo8?KfrT &1@@GX/'5VtT= `|ا 䳙$ 5~op_eP@;KeЗE#@jLJ͋@ $XtŐ~ʁʓ?;wP(VWg⭝.YMQf(g(a/TNTPW(~og)lfW||  y$okZ9Iw:&E?"-PfqL q5- A_ 3)5/F'?QAƋ.> 2.RhӼ5Ϗף=yW͛_̟Gɿ晤D5_VQCC@@<?7~R R'aZf\tX wQz!aSɐ-E@Z} MVNj/hmP?c&QK%Z&   ;Z2,@~HShꩇ@uRF(~evKd9 ~&Ce    @&~l?=BSw Z`};rI9Sg*Z̋gRY>   4D^ YP%/Plf~=n -L3(C 3A|@@@5u.aeeew*Ҳ.j2z>,u]0fʡStk~V/!@ 9/DϤh @@@ȍ@tX1E%ߞt$ZfdO,Mg0K$t@@@@@ {,1::)؞l#Ndc۔b۞L{|@@@@bWJ~ (y<    @ 2K^9歜1,u^    d_ m,~Y# W7tSC   LLY1pm=l9\O4@@@ȱ@E@(3{^ҽ(6@@@@@ pLIY       PSLM@@@@@@ PI;) D@@@@@j P      v 2i'e      @M 25=C@@@@@.@A&,@@@@@)@Ac      @(Ȥ"     5(` H{ZJ4mrݺjħ5|"Z[d녾뻀QT&̟8řL 'zL}3@@@@bѦ) @^^U6͞/SL52՘ |,LQΑ>Lgұ&Z!yXDe,G@@@@" S;MCx5ҋ@M|D, m\a_ҔM=Ä `PA^    PLB@"RDdj|rZV d>Wf)*)C/ЌU-*mŊ_2IݪLRV))趇F^Q|= 5%I/\5 Y@*(E+9C{M9^iHT|{)+)Is?    QDH|-% SPX98o~&>OX˟ RZ*s7_ PnS|#J//̼x᧷Gu{xQ駌P0Z)&x[<+2 =>_P.RS۟5rv8-֗1FTUu2XIdh,T:8fJX a %eFuUXVEօYTFd!.EYقuMTVE"fehef31y=9 ϙl|^Ÿ9 ( ( ( (lt6qDkN!{,+lJ&ˆLN /I*՚bAWX%y3\B4D[i%גkHi|NN#82dfIPj&Gm Md/(ku2rpyr̻z2{y|CʚJHߖ ys"OgL^VV%9d KVFN5̚O^ KaR@P@P@P@P d 4HC&F$}l[Hi4 Yebe]э0,+WB2r(re1V.8hRgW /,Ӫm9 rB^#בjÈMKP@P@P@P@lt^P/rH2IfIRKfWoB k͚TDI8c=Z_t@̜9d}i+cqJu/b~rloVH>d<׋$)9X ( ( ( ( 4pL#.V@z,3'[miŒ&y%VkRi.',q/y7[I"Y\d yt|eXKrܞ$3v%ilr~,Mz`+zy$M{ =T@P@P@P@P&`C (Os۹vfwj3zfϭCEL3j˚hRVO3z؞Fa$3p~"X`2C*R^~ Hk̲3~M]Uf쨪ϷG0:@֓Zٗ\* ( ( ( ( m8P@v:mx4B\n]UuiBܗϑFLb) ( ( ( (c~kX^?ԮzfͿy4k. 
( ( ( ( (@vk- ( ( ( ( ( (k YCcD׎i=yq)^;۟ ( ( ( (GlLY4A&{hLYf^9Zf ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( Q5~mIENDB`concread-0.4.6/static/cow_arc_3.png000064400000000000000000002371211046102023000152530ustar 00000000000000PNG  IHDR_]j sRGBeXIfMM*V^(ifHH_]<@' pHYs   iTXtXML:com.adobe.xmp 1 2 2 5 ώ@IDATx%U70( "ILH (WEaU0(T@5EQDQ q$IDr}vTn7<ϭ:UꜧZoWR4 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @aIJ1D- @ @ @=&p~wI#c"Gf'12ʉcs9418y֎'Dn,凱}d&qM݇qC @ @ @@ gFy#N, w$%2m(Ni[j6OG"U8:F\:<νQl8+F䙑GNY6N9֋# @ @ c ǘmޱwbl2λ#QMkpwKӑE/ @ @ @;)~6{#[>,E$.C"U˻ɻPcN#U[9Ȓȳ"3NDYv*m˿<-1TE7rȏ#{FmX93?F^[姑;M-iyPsjZϕbë^W,x @ @ _Oޘw4ZK"ɑ{E^9=rEHoU ]lfAޮTEEjX8,MqdyK>>-խ2%)6ǏTu?R938z[YwzY=r9TꯕD׺j_c!k߈]uȳ7FnB @ @zZ #Si8xiwxd%UYbDFNd%H^zY˗:P/r;|ʇk5|+HwȢ;"irgdHj0ZT[o.Ұ"?08,$<|+ƑvDzcz`$X_3 @ @@XOEg~o5(ȿE.Kf:%K^oGYxl$p䩑wF;42Q5v}Cz[?V?Woۆ;xvJDrLɖŏ\]dȂH5\^%wwZY3ٗ&cbÕ/ 1h="i+;b[M @ @xB תecje^wDإy#C9ǹUdڶ)߻u*d_- 'izUǘ8]q]) @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @X`fhs$0g |.]>z? @ @ @@_ W1Ke)=#B @ @-2З/uAw2 @ @L_@e@ @ @X.0oc,1~$̀ @ @J/]u9  @ @uŗ^O @ @tKW]!@ @ @z]@ׯ @ @ @]%U` @ @ @^0~ @ @mcm"FE[8&s2rydID#@tT/v2 @ 0'kY\m䗑k"'GT퇱jK^OqX}\yȞlsLԱ @P|i6 @ @`4U,<9Dvz2ʱ首w M0"Y|y}䙑EV+2-3јf|E@ @ @m +{NȾ|1ybs#'wG]ɢ#{E0w|!rij,|ngFʅhyO5 ;!_IԉM Ϳ| @)E"j8׵#KuH,ے,:yJ, d)Z=?\U&1b&UfQ~~,m;bVӢ# d!睑F,U˂LUk'lɖ6Snf{fK,: @@?S1 @ @`D'5DG6wt.k$~*rs^BŢCFH Z/#p=WXsؖrmHlY3Hղ@m?gEg,d 3ϋ|AK^Z#@ @~;Q<캬+ 8Y`ȏ#7F V*<ⲶZDɻh̢ #Yk䅑"U,ckG oP12/dΑEwDȎe( UY4Y׽"߉li;S^^oFA#@X`^ @ @֗0 "yGQ^o/f~!Ne{c䳑m"#1d]t?>]Er["YyT* ?>dD8o93#FQeYxywd"^"FɖDsX[G2gEG&j/rh^|o5 @}, @ @@]/wd3#U8wcL8_(/8e^sn,fTb͑ΜOrAsPks=?6;Hnwc~6_ce6Y98H_VyV$[%ţlռV44tsYgEۮ=܈yN+7xciV[m5bTVrSy @ @ @K]2(B GO_G?Zv?Oe/[n r)sYv }ݷlS>~^=cx# @ @ @`63}vR,}.:@'ӉstI;"@ @ @@(t˕0 @ @ @/|K_ţ}h 22 @ @ @| @ @ @_&o @ @ @1_Ƥ @ @ 0yŗɛy @ @ @`Lŗ1il @ @ @L^@efA @ 0T/Q @(/"@ @ 0r KqwS@e0Y @ @ @$2KK @ 0[)c)a){X)oۮSj GƶG[_~[)w1r .𧖲>Rsk-ӯh-;Jؑﭯ8>RۤW̋MK9qR,eL@. 
@ @JYtn)y)_}gk[GRNxs?.o)eJK3W]R޳s)Sʓpg﷊9\RzQ@hk߻>t{KϿ\ʃw)K8c{Z'@>rF @ @O.87~^T5)#-O(]2o֤?R\)nZUW^P) ~Qfj~<)}-Y㮚=,q/+KYxB%}k.ߎ /}qM @ @`T⎖,d/ϊnjU/Gfݣ`{V^r#ȶyL)OmK9*;Rc)VyF}+ҷ @ @/UT=׍,e KKѷM?r\(_}-+Go?%}vx0Υ @@ 34A @ @Xks_{xWRޕR[ި>{ӵA)+g?/i)cR|lQ)5;m'@P| h @ @ wT-%NlVV[,lpxsF;Ar>d_e6lk6/^\>//jƅztյ#_ @ @̒aowofSOm?R}r_f'EzB6/b]n @ @ 0Mŗom5-ȿEnjc6>moviǣ}f 7qeQ'D  @ @ @, 0vC"+ّE檝'}ac > ^0-Պ!}l}Ce# @L]BW/oy$޶Y*xlz8o8oM㹌0YL9v'ʿ o:؝ @ @&)29ucO-7ob,ٲ/iVbߊy!۳Nl{H @Xtm].\XN=rWt9_lW^yt3+k=+uP̠0o<,{Xz/\k8u @ @/ʻDN!{ce{A96D!Ҽk;b7 @Z1ʼP0  @Ό#pG9.R=Oa?=ȵ꺶_#@ @"HwV=2cyۤG.a @^oX˿<`zwGƟ`w_֎\X}6h @ @+R7p>b/nK0V 0YEX-/Eld5\&55 DE|=.i @ @~wQc_H}9.Gcy}dH/Š3Zla?E?Y<"F @zU`\~cc C5?chhuNwҥG9COs2.3f䕽 @fYqs"B{f8q=k'1(Y$@.Xi/Rg2$e־U'J+ @@2'#X@Χ9|?kx- @@fx_zӉs4˝O'1Nwb>8G}Nom/'@ @@&"|4}0B~ͻsyyJ],0kŗ. @ @1CFGkL"J]*б`/.%{X 2la6ydf&/ @ CXS[baj^\4R}gCc- .ΗnBG @ ;EnP5eȱo7߷gP@ /! @ @ϰ[] 1j[+}εZ$@@ (3r @ @#՘cF_^:1S}V  @_ @ @Mb=#zJRiLj*tK] C!@ @gZ?c~_\;jU+^  @_ @ @@K`xyEc~__?ݘ;c}FUŗ.A @ 0B`^5z>rkŤM|X>nH@. @ @,xs,m|Z\b٘cY> /]p  @ @`}cmR5mQ  F#@.P|颋a( @ @?תY]obu5c9E#@.P|颋a( @ @_E b}'{ƌqdhY%@9P|C|&@ @!j5_g}zp\UCX-?Q[Hs,2  @ @ \VʒX7 b˖gH@e @ @`=|&_]9k6 @(S @ @CY֛,6[ GIlH2X@e/ @ @gso}VG \U5 @@_: t @ @#kyK,VP<{b_k[Wj  @`_)  @ @ 3^VX/gql[cSRoʞ Yŗz; @ @ mc1/Vo;,O(^u}V  @C/v @ @{|"zV^˾027ƾ4d́;% @ @@0xZXgj{5߶C*t@@NA @ 0B`X;bDO)}V''pxE U^  @3/qv @ @c~+V]ںũ cx}V  @`_f  @ @Flk)#~AofyG> 0/ @ @hyVm,}1| dǸvH   @`/7t @ @oD= [j/8D{lDh[%@i ̛~[& @fAqG94rDdȇ"X9ayQnyWOFf$?{c$?gD#@j @ @fJ`869/rldH7T+^g\Q|w#F{w_!5AY+,]9: ىt:)0s @]'Q~H=$V^P?7άq4wo}V  @`Y|٧_>yDs[os0kjN̖Sm*l#@IDAT; @ @cbL3"8Je4QsDB @ @ @9|wim9㠜y1K#\ y @ @ @^u s%c"E^}iNH @ @ @L ̇#E 2q[d @ @ @xXpdFZqG&{>; @ @ 0MCck@~KT&s= @ @ @p/}Q S65y]R @ @MSc|oޜbύz9v˧znvL @ @\x/GܡۦZ Mny x @ @ @ }Clc`X;"c]`@ @ @s"c}^l?)FqYdkA @ @ @e~ >]%qh߻jC @ @Xघ{bSp~rn @ @ @A$&Z}X~ҠL~Ku~1| @ @ ,ϊbV3cGfL#@ @ @X`8~pȪs<@о] @ @ @L k8s&0gy꜍  @ @ @ev׳ 0Y @ @?3l>n6uX`ҥG9COgt~fxJ'矹)xc 4 @ @ (tBt7*?swl9.AP|+o @ @@k;۱zmu^ᘥ6L@. 
@ @ @s+0onO @ @X7>񂅇lڛfkA-K%0(|+m @ @ @@G_:$ @ @ @( ʕ6O @ @ @#/av @ @ @`P_J' @ @ ŗ0;  @ @ 0(/r͓ @ @舀KG @ @ŗAI @ @tD@#NB @ @ ˠ\i$@ @ @:"f'!@ @ @E@ePy @ @ @P| @ @ @"2(W<  @ @ @(tI @ @ @AP|+m @ @ @@G_:$ @ @ @( ʕ6O @ @ @#/av @ @ @`P_J' @ @ ŗ0;  @ @&rk]x <>6æ>j񔣦~ 9~R>R޶]=O+/9;G"@M=B @ 0wǻNxoox-|魥\}q)wց:ń3w=n*副h1R~RR6bB; @`6_fC1  @ @(p1-.w) @wVR|R)y~;<#[ʙ_()c~ @ @s%#KyjR?ćU鉥ףkK9eJ1>dJ+KoR^Wߦ)KyN|_;_rSJy>t;Y|y)o;^F\zR>R^u7^]cP|7f;gKY+͟lY 0G/s @ @ \RBKW)~(r,u^+(VZ)Ky.[KxR~R~|l}Y0Ytn)GejwNZPʹ[[Ş=]Ux\ӏZ?8*qxͥcӺKvݽ?S 뿞Zh} a0 @ @ 샥q`kyjݱ廔k?K/_Rm}bkΔK-/(u^_FR.8,\(fwߧD(9㳥1SG!guZxX)跟<:]c'9r5tP@NE @X.j<+xښ뗲[|hՃJG)UyYxɶq'Li^В6}@k[nh8F~ֻ3w]O-OH~l[^'#'@/?f_\F @ @zN9[LYk`E?-ӯ,eww|lQ)bKC˗-,]b}J5lyh6kil3dI)(|8xD_5?G`0_:% @ mYh9䑣:k~7j/K?XCvGǮ;nIE֣.UuɼoxI+,@?f)/P|ѧJךŭرtN @Q/+E-%7NN+e״lmrla^[5C[}uϾ3>WG_ʿ#̖,.O 8{x@̝~t`߃t1#/gP| @ @FlU)޹E(~WFa%[*k=2YfK Zws[uT[Wʱ򩗷]}R^wzcoG'77rk,o?syץ-eq|l{Uib 4ٱ: @ @g,X0^ =xƞnr'{5~S)ުU クV$ $i7]Wmqʂa\܈K<+37@> @ @E %3ƥd&Fc?lrGw+e$gj @ @ @(t) @ @ @P|km @ @ @@_: @ @ @( ε6S @ @ @/@v  @ @ @`p_Z) @ @ ŗ ; @ @ 08/s͔ @ @耀K @ @ŗfJ @ @t@@NA @ @ \k3%@ @ @: d @ @ @G@ep @ @ @P|S @ @ @#28L  @ @ @(t) @ @ @P|km @ @ @@_: @ @ @( ε6S @ @ @/@v  @ @ @`p_Z) @ @ y8S @ @J`Cj>&~!/u  @ @)u] /,Zy8#t-K,Y6<ۃJ=K-^\pe޼y73kfJ@e$ @[K^>44hW򕒯vy}.w޲q<(|L@^?\7w17r/Ɩ[nY^5\siܨIzÔ鼑 @u>p袋g?兗ص,՗v ,I^ vcd?gnGX/ӥ^Z>ϔ+Z?s'i @ @s$0YYȧb[[s:N^;-țc-Ka& @ @^ubhߎ4?#tkk[i\#+?sxsU" @ @̂F}o!;@/̭H9\IdF @ @3(8?#f]fX)ȣ#|L @@~!ѾѿyM?!^CEWs$ѷ(EΏ/,"lz ϟiu>3AIq4OG}d&q4i~}o"Oh @ @3F6պy𣌭{:.̨@Vg`?Tc~2W;<=,0o|}X~A$+햿Yi[j6O$cxZjXǨs5οQtIc#Ypkȋ#_yLp1qψ mG I#@ @&TH]UL@ ψGD^*/[8ּc#SyKDm!=cёlj ñ-Qy𶏎m]A5[rolnN @@b ͧQcC'v:C=S=hW˽o4seKNʸ =3agoXu5oG].C"U;ɻ_\N#U˻!Ȓȳ"3NDeYߎTţMcבE[9XαqdH+gF1H=3V~c_֎ EXGS?g8vɱuG5sg<ؽLtsv-;G @: 俽Yqc}H;_#@`"˟ӾO٧o uKEȕEBA%C]ajBy9~~u%1BR /7FP%{G)}J |a~^}bHվ <(^@}%n#y(d!j,d/y-)ex"IŪjs#w9^=^ϗWEo\H:9X}*dx+7L8~;^Xq4rybצ*εChUZpi^ @ @@G^g9XH?rY#03)Ҝcg ny / (ֶ/`ym"}pZ~mH>kH~Ŝ,Ty,g[dgZ͖弲r?sNH;rwHժ]Ru^snuڦe띍UՑdqlۇ"[ly εn(jt"yɍmV  @ @@'f;+:-rs? 
I1%S#a["h Js~b$x/ "%<~~&u#ߌTzXU7f5 v}mUFX$uLdȚEvUGu)+Hsy}IQы4)]+5h.hbM,-VbÎ DTPһt {ww^)<3ٙ\l#*mǂh{`䣉~:[N3lCu:΢UN?*VY   G6]M?@ hVSg 4^BA"H%K  kqO;;(xǍ߮I GOW&Kp'sT銽%/_- 6U|W|Q/Umamx'winRY?)^;bfG(n+~犗D;-lNo W$ⴑUu# ~tӸŷ!5L>|   @NՊ[464\`}a (szgLx?@ &V\N/=1Jh _* xGNKK^ JyguU.UlY:ϡwi|I鴍K;_qa*o;+V)^vU(>3UΧEjeudwTUkwviZж/vW~b%1Y񒈓rAcubK>8[a@@@(POmߦ~b(-PBN qt>@+PT3(4)lϮB+`\g|;;+b5,_tDwtW^AnG`~Vuk$!   6j%|!Vob)abo3{z^T+Sm4HA@@@@K0E _'M_ ;QA b> }NauR(     @^ NEO%$uH]˴@|9#E@ [ִ1@@@@-؝JBqj"oK$ow ιBjM"Z     @n lz_ _+~:%N& bιrPJTg͢     M/li&Z69lP pέ{Hդ5JfuY:S@@@@@.W3'iZlRN5쓓F\[M}jڡa*     ZͽJQ(a9 !ι]7׬su1@@@@H@5p*JEUŹ!zιMg?|      =JRM;9#[+ {^$9;]V+ao4kbMP @@@@yZtHPfbɰYQ4;UιΗ|M;}^N^wWRR2]ݿ֋   PF;HǍ?4:emxµX !9jS-rjE_д3q1 rHnm ##@@@(FP/K4D/ /B @ArwĖi3e؉ #@ {$[TLwb~0  ڦwmqy>4Q@Pn׎ 6\9*4Q!:_r`)    P&HC+) ʦF^ntF@BxG;2*c-5G!FlCn zCno`-~XE ᵇk[W4I#@^LV_a,/ ߭+[M?TyX  PSsE+fG/nyO_P@ G%Gy-@K^>6VϬ`/;~;K_.}rBNj( PT˵+*C{~1d[3YΗ,j( ]vҳ-nwoip2Ķ8V4?VN->jy}8g^W $ъ}#TE ,9OZʃqVM{MYtzwkt2 PS:_j*r    =c7+(#JlAM.2^&HUח PSιʥnjǖ4/͔Ѐ? )e(EM    olX\ ~GEЌ1Mh&ԣIԥ* lFsqPR4swpxGL '25fn,   68Xs2B-FR-0V 7ݫ+~1R\mS'j- cGBj<>PF@ />,C@@@@ ,p& O3wF9IYg>R/00&@3TA "00D$ Uw9_M`v7 \5v,   Qq~KVTє{B Yιd[Vj^R *&#   @BuTS5jJT˚u(Js.Ui}~RICޒU +/F+Ğ#   @UgjUU3RM;S2'0UU5Ř@[ռ*cV N#\2TG:_b0D@@@Z?ZǕQ2#V~Fx"@s.)hXq] jWeY$@    TF-A nc~~5ZqV|T pΥZ4i|XMQ^jY\@ G]mͰIӭmEK2L(So8͎xUt}nKV kn[olw#N%@KANvY`ӖͬMمڥo|ds-5k oLx8~Wl¼vҶ[M}a' zJJ 5]vm:Ni d~ߟ}},m(Ks.]}\tRPnU~P)G\[tԦ /b@ xj)vmi_WW >˥]&[Ҏ^/mO`.ۮmZO&=W\w=k6kܤe@Hj1PAJTfju0t 8ҭX-Js>+Ǘ;++gh*0e] Y1`]WǎvZ} ]ccgϋ\ȿн.˶3߇#틟g[zumM7mڿni^S;v.~mô|t{\&r\zmYNj[Η Ηg^l?H[x==nng+V.aagjuQqg" L-hvNJw V*xJt[f0t pΥ[֢;_NP3[{D) / QȄȟf;DV5n|_^ݶ7lk~YM5k-#ڭq+y6ddNozl5\gwQg=#)ΗLx>tԁֽC Σܪy錘}1zc'zͬӦ?LmZ۶[m^)ϙW6Q_+@@ ךۗ}>W_ʔZه)[s.©i;5wi1kN}>BA|ވ @V;?լK̹&zU3No>ʮ}3{ivAB)Hgz:ҡm߮uSnu_#jժ{~'Yn[xxq1@rR`my?OpyI~D7[ʆ\~IT-DAGw?V( P/1Ȗ@t6ɆMo/[>1ayYvl"uQ.NODܭ!SRTۃt˺bQ~2 @fӅGŹ#nY"~-W=MQ\G !PNΗꄘdIC\]%VbS-OS^կg6=. 
i/#=cW\im͇P@H@>}׻P_Uj%%%ӕ;c pR[9WG77ϹOVԵMK ihըQ+VVX2~ Z/}}̞gszʑER,s@H/鳭MǦ6m겜syd8rVq}+s.7SUtΒ@ m4O5k2q6k` 7>%z7ku25Q@R%"RVPMWj~vZ+cS$k-cSڣ86qZyXmYȒU{tvNط^۾n&ĩmϳ"[tvz %@ӦQg_x͖,   /q ("/Wݖ^m2qJD`݇훐FNB R_tLg_4/uƿn:a-6d$RUe3 P}P+;wb\ٔm\ƨ㮈s.. (9F\+P\\vy1     @6%ꅽ:ڽ ܡJb^×ƌ3     @V| {A=H+Bj;:\ 5W1     @F|(wCR5g54?f=fA@@@@@2.@K~+f,eH>gkCe M F7@Ђ|"     2:_RFY4 =?74FO MKjpd{40fwx'8e "     r:_RNZ vvS>P*(~ݧQ2Z;X*[YoOA@@@@@ mt`ީtϼ e?eg%\U8w0:<|3#I "     :ݑ5R! lڡV?94)QJ5է%\%,xo6A<~_ k@@@@H;_``WwLztޟJ|I*     5vŸd"swk}5/M`%mUgQ/\pBXRR26j |,n%x$+F}@@@NΗꄘ+R!eHM:ΝJs:IA)6C01ÓOv,1+M urT](    ad.gxX] $s/2))D:7Sb@jdZ7D  fkux ι8rιr 2#9gRA `h|XB;+x il7)G*U|ZX|f!  U {\udCq6a3vfsW_/sn9B|'}Q?w: ){P@6Ϲv"1z orY9ֽΙ;a%i] Ηss^zwUF'D'f  E)PTg6FnfIK~;^3s%snfmvCz|DDsڵxzHwZL pKj^@IDATʹ=lxBNj6#R3tbeUdʺ?|Yׄ)   P71p|VS'+6]L\yu.y;31 JX=tOfӊI f3;}|@9aƷz}hNjwҿuH).tsݦ/7ғ;aj,@Kj:*ޔ.ʗ9|@8۷qW&!  @ V_IZmh[ݙt~HffvĕGsm2Ϭݼmnfm6vџ`.ꄡ@ιsuA+:Oo׹t [ w*n dk옿F}qHk-ב-O\ 7GLrHGM%$XLz5Q/ yf(52/%DA@@j!0O9~# ~1`3n.X/ovqDd=&;(#set4;OQ{ BL8x܋yO3uN_1Kйy^~ެ]٣@@tTJÌR6y5^syBmt*   mV[pfnִ֟Tͻt!Q4%3vɭ 97kٍ-} C)>Ls;^3Ϳ[ZqSKKR؟:缃e=TĖ[~kCvɳSF a:_*ڊݴn) ڗ] h@@Ȗ@督korLV<}/ٸv6?Ot 59юuoCMe A S\&2{ኝ/(2Jqd뺧ٟV4˺[#fvEƒ%⬫m'I(%  dJ۵!fZn^ogvYOu4j}׋{w|CEֲB97'ѯuǑSv) @9Ԙ{fwb~E3;]WX S\KjG[/mJ_, WZk:u\qι=zF vU_LsR:*}e@%tTɅvHhPQ`>[ZNjiPk~Bfo(*tN}](nM_\)͵C=צWƝ4AWZw>RCinFoyQVh+PxS^l- @1 \dŽn:LznܩfQykȿԼif ϲ_{VW5?4KU=?y(!s߳B)Jlsf 6[-ՙݴeQlsa'BfaX+/*{)C <FUTx熗c}Pkx'Ύ/ Y%^Z:qrUL΢xvP3tb;mb`2i;%%(UqÙ.ʢng_;wF)@@@@<%y#w4wix~tt xwtVV)+~nJ=J]n;^((txGw> O w \xIdtGCeGB?}?$򎬅J+e;Ž1 :9s~QZL4<\٫wh%0cMK'     Jyl6 ?jlw㭼V/ox+Jx,N`!^&G~X>Xlͽ?i޻(uJdT=A}w$ZKgewܣ|twĖz?;^L~vb+VV*UzFc     @ GX~BV;WzCp $ 8ʹwP(4%|IG:QgR>DkfNէ;˞JǔъX +Ө:Aw4    乀_̣ dG;&w+d}1+];n I GOHU+g+(SR>R@ B  (nfh^e0D [)͔x+\@KMRmv~CE    乀_ dG@SX8X];+)H0~fw3=;$)~1P$RWk*{?lXxgLPۇ^u^~x ?և;VʥX%2?^1dGRjʋrDP@@@@\Η*pJ/xg£H#*~F/{()wf/} P e"W*Umkܬ+#5;_NT|;(AǏUAxMaQWirGxN G(^WiӗFqӓJe%3^`w\{sV(*5 @@@@ |#@ l[+J. 
Fʿ~ǎ-U<̏ E?3)RUj;˽ AGGx^upHxjoՋh;HZ(~d%܉zr(͕ɊQWLQ!%\|ꔨߕTRR/@@@@ |ɿc#@ $w_>v#&\jұ)hx;\+]ej]@@@@ 7z)      )%E4     /      @ |I!&M!     tp      )%4     9     PΗb      @       B:_RIS      /      @ ꧰-B@@@ +}'zYi p֞seKx9WǞ=OwƑV@@@@@@/   y'PRR2=6H6PMW!zl u8rWscetqc@@@\$ʽS۲o\ \:Ti*ιtB>Kmɥ)m۷oIhzaVhg     w1d@@@@@@ ж-duK,#GZG֠A,_>H[ᄏ5n8%k'TN5KemlW_Y֭{Vn]˄Y1     z:_RoZuݟi:>쳑oO<.s?4eիW {'Bx fAU:e˖?lޱe̘1vhҤIA>ǀ@@@@@Zx 26 ]ӮSHZmp|5m7      @#Uc?oIĶÔa{n{R&Q@@@@@XjڍJ]%U%T[ˬ>Pz @@@@@Og* 4$Jzx=,Sui@(u0BA@@@@@.VqT$FՂu._3geJsP@@@@@@ ƚ\`|I[TlU~WE8|/]elWOUWMP@@@@@@ wA\FHtܗ-ƒ fXݡ     )EE3[ _X_i=SIyNx]Ս2l1\1s;Z4}X]G@@@@@ k[h(Ȯ嚱Z _PVӶlZNg}}b.dv?r,|d񶒪*[@@@@@Hj/g M45yeR5W% oG/S%v#^0Q:gg2     @>N{a/C:%w&4:)WհeY%ޱGӃ;jxhLYnP@@@@@P?Y(w$ zÕLjRrѬ̿xCMo-|      77iK] _FӺ*,je14K1e-U/^GBA@@@@@ r _lY]/K{m^R.f;h3T:nqW.     5xM^lxHb'^R.fo(wgC     1ڤoɁmX[6L^ _ZjWz)/*սhtS(     d\XمɚwRGɇI4>L^ ͚iNRQ+A[NA@@@@@ /i.ߒԭpPg3l1%|)     dL/TU 1Mh*%qB1k]>\yH4W(     dD۴eFٕ֔4< Mt{ D@@@@N m)ӧuy@XFJMb.feY@@@@YnvP:^;|eY 'f[51 $D@@@@#Η%\Ӿ?XG>{8u)LO&O.I#    @Mg       @ &zCV/emդ!̒W X@@@@Hw֓@@@@@@|)G@@@@@ ZOZC@@@@@"Ov@@@@@R+@Kj=i @@@@@\Η"?}@@@@@H/5@@@@@(r:_`@@@@@@ t֓@@@@@@|)G@@@@@ ZOZC@@@@@"Ov@@@@@R+@Kj=i @@@@@\Η"?}@@@@@H/5@@@@@(r:_`@@@@@@ t֓@@@@@@|)G@@@@@ ZOZC@@@@@"Ov@@@@@R+@Kj=i @@@@@\Η"?}@@@@@H/5@@@@@(r:_`@@@@@@ t֓@@@@@@|)G@@@@@ Omswbg2'fqYY|Re#650T]x9ʇJ⥝md(m?56@@@4L]4ٱQN:|u_;=m̜ؾ)y>>~dxIm k~c7~|@fv'k쐧w/g?|uk; _inyRNg@I`6_m7ok>}F뮗(th`Ǎ^" #uMZncajaG5}o̴)εhesG/.Ƿp =0o\$Xe ?kg`;?z韊?ɿ)FX2ekW؄'f'2ж?l*.wxԞjlkܺu>:x;kMSyRNJtc ;ZbK0vQ{h VQW}   6xVn=ۮz'լ/Z.zƞ~vgiv6gDk>o\l֥UWf֗6uȣ|//|-]D9;2^L?8&w:n+~M[tF.]n*q[zYV!WVc`|;;ْmzEXQ;SްNm[]+r%`Ҡ96mn^m ه*W>{.߳uد}ߟN=uLl3jIR*haiԳN=[Kz4-O׳1\YZl^b'-ףZFW_3m5q=P?W1'(%CQ(MHrc3\\LRR=a?- /NA@@$Dϗ.4Ke[v~w_?sK03ŧeٟ߻=Z~voX"?%vק: ڷ@dlV[#`{s+cCpL"C^Nvd|VYiyz7q:]wKcڜelȮloy=jW>T>,?D./UKi^-W%7yJh_2~ j=/a/lk/!oYwg{޶}rVQ]ů뷤z`|zCƆ_}DZy'L}[miǗ;]>`wVc6TϾ"l-D 3yEE+~V'GhYyŊw4M>;vR[F>_9S Oy4ioΗRs>@@@T ,ӷG]vߡ7mnӷlҢhf_<̺| n>.vo&ۆ6& 
&ΘONfb"f:XFʪ4vx1pLrſɱVWA=EΤC:nF_=7m̑?GFzё׽oFXvM䎰u٤ؽm#n?T,yইEUPdcpL'ŏ3؋Kqo[ۨZu׻쑋LkQCֶ:[ѫ~WL ~6Oig\ߩBrضi,5/8GCm*S3;GF()G(WlYnhi;V6TNQ(d J|}A﮼L(جGnikM٨M4owxr[o㆑`-'Ŗ\e+>&ZV6kG-v5?{'E}qw^D c1cCX F]kT!bD Evٻ憽~|ܙN}533Z][(>q뮴U|+xsAI6Ow?Q(.*|\?Ch X@@@M>}q~Xti^pث¦TjI螺X\NΌyWS|?>tiVD[UqJMǤR6u1QCW:ϭKWr |m ȧ3`ƅ|wS1t UgY駾t%{t3B/~tO D\`go}_WqjI>Uk-a8|svllj[9 ^^".~S6PU|OPwrlhO jӟy'}o? }^=|BXݒ@WL{iNa߶-Tjsm^^^n>-J^g赯s~U\͚GeҺV4W[6\j 3^[16Jܞԛy:Ӹnd+@IDATav7N\')):o؋$uѤ}0B]ۅݟvKx|W[]ճuD{RK ޙT)r2G,h.ܢ))E?O1\cMyJO: {\^W}Ny%}mܲa?u۱cB ' C-/9&¡tSͿHqsm#M⯸gw/mu^}۱.?ŗ>Ht)R3?39U?~R?jxBC@;"0V_t~[@s(D>wfuyV])>MuPvcRmlDdk1ߏ ~;߿rSW&5rɦ鑸}io&?xpN$T֦{{6}uܶW\Iu8zV6sWu_L˗FMoIUm?ya{4 V+:k@ϢO)zoL_!m[IkޱIpv޲YSUbg(V+7)|n(ZFx73SS^G(9J0ָUeS*>9/ka&NN9Eg+5]ɣϣ]fW@ OK…wQ\?h77q  P@ߙ> Al ?="}\t-Vk^ӱHGlp& M*ҘxN,u5rEgٿ[m/j]"|K_rR-Sf3>ö"q[ZߡzJ>.|9ו-懆O~lv{\y_/ ~ޤuw4raTWʶ4J ^yZVc9.uy+G8k| {qrOAʉwJ`}R&22?,@YܪܨЪ!64@ Wӎ8s\Cu]);?IU"O$\RVP|R-ImqmKܲ^Ozj}EA=|<ヰaMx½v zQO餜|$:'-|rśʙd%-qe6WY7RxH6mHUjf ]y~YK3{~fKnƐ'M e)YڲSVvQQ3?;EUn^ S+jwjs򾲾8ˁJ_-Y)+$4KvD^gjmPxB^5pfo7eeWofw5s) 1 ,6WISY3M/zImՔZJ+2~8.<0BC@ᶏ ?-^?l\ug6n誁c6>1HC'2Q+Ui5O5ݭܤzc2}Q9@V, tM, 9'Olˌp|Kaa.;7jv\gьå/`}'o-NoVR>VjʃcrF _tg'RIۡشoK:Jl|&<9qWL {|W7m]9-4j 1Fqߵ>k0PiM+(#5V>E@6? hF>)̝K ӽZcӕϕqrR5͒gGO).RxF%CЬ*UKrT_((ǿ"ftaOfzuQRVf((/)4@sNoo|Ru; . yfzWaSXR3D&r 2^u}L|K,{xb;ʷχ(tmӭ9ؠY-m .ZrEO& m(\vޏS^<:u ~@n_CWߩj|ZO]鬏3^yMϤR$1˲KU4)k.lz~Рaя|ˇL 3zzXb) Gtu'736Ub'}V`G|Vѽ:hp=oҫE/#a$5a­?QP/?R[n!cůQJZ U ] vEWş T^WP>V)ݾW" n-"IU{3^ܫ,Rh  P|_p`u/Y$9*, >&AʚFXث~]{쥫9 +W[C0R;!K{ L7%3aZ6.}r]Aȸ8|`ZXMdŒJ_:ʶ!yQyʉ>[M+K`߷7Y-mxZDLYL-g&o٥itliZum(=* t z;fG8VjWs)J􏭢?Ӹ`j#C*QgL _wVX4siVmWn /P+V8WiP2'+@ +s˟@yNqë -*+que_jŦl3rQGT^:oQ*( @ \ysya O}Wþ| #Ρڄ-.xœd[by;ּIS_=v0bMJ;v];) Ǒl8&_yXaugiQ1rI x%:gFBv"3>IlVݚRx.wi^*^3\Mn/^x.\O) ^ex #w<ޟu\~Txh}/)+X>tjxtӱoƜ79,d 6kвK}` ʫ.ϖS\ 9S)38FRe\CqN/ټ*s+_"@ 4_ƾP|iL[Worol_>o_ Y,бEеu՜KWr |upgGωg80#>¨S'o!a 9||¶7NUye1]@"?J%져@y57#)ʍ''+o*. 
[unrecoverable binary data: PNG image content]
concread-0.4.6/static/cow_arc_4.png  [binary PNG image data]
concread-0.4.6/static/cow_arc_5.png  [binary PNG image data]
@3>]'YS;f˺<6'g8{)03X/'tǼ筽|ɼ @XgJ @ @A 5u{$NQ9dt9oca(I'k+Sokk~mfGIMOV3 @ @ @ 'WScOםf6΅_\tuT{[=簯.=&CM|Mշv?fCٺ,xE c @ @ @`Ήɷf}6mӿɖK|QhtROB+fCȩIٸk @ @ @fP&찔#ґdT9;f{Z˓r/Q̖ǒ( @ @ @LS`Nri7ßƧ:Q}܇ٱ!n{ 6;hd @ @ @#P]4=Vݿ=1?]yO 0`D~bD @,^5 @ @`~C-ZtTRO+C"rʳC.]z\KIɶ0q\?_hZУ 3c @XoD @ @ (J.[ed&ӒS[0$^|f|Jyxp޵;W @LLD9 @ @M'InRgc8Zf&Y=1Y PG'nX(f;;:k+=%@L$ @ @f,;RWr  @ @ @` ,6eGvV8>9*I$.8'tV-  @#'`~n @ @-pF8fg9 @. @ @ @ @@O=YT @ @ @ @ + @ @ @ @&{$@ @ @ @0i?xcW @ @ @ @=LdQIl8~/ ݨ-b؝X *؈`a Wag2;}s:>3,{@@@@@@*^}@@@@@@@ POD@@@@@@@*^}@@@@@@@ POD@@@@@@@*^}@@@@@@@ POD@@@@@@@*^}@@@@@T h\4jzGi^Ct֠^5;&f>S mAZv젝v_h 7>nknYn; k$G6MVvhۇ}zѓE @@ gx}ξ8    ZTg׳' Cfuڬeߋ'ِYV5V<;ΉgsmZ7\}ͲW>[ ^l5:oǖY8ߎ{&u}۔{&.yg     o; ˽ڃ/Oc?ğ<ڷeg,G.#lhuk[MmGois=rSgǖR&0["{˹Mo֡]Ýۿ 6Zwܬ^`%`QPK_``jn[QI6y2;qWQ?+_{j 5(ؗE@@ 7i3g   @ ,@T+{iIHE4K$*K [?s~ۘ-k)qϟ3~4kve $.]yR \_q6fVoEy}w_c7y=ʌ"#-{>~0|MfywNNiaC!c>iX_?;vAC"ov-zE1 E\{9_@@@ZeKnhK)W훑#{;9̙bvkfևgϚyc@di3wش|zfj =@zn {I ^ZYV5mGb{W/fiŶL^Ȟͺb0@rL}.   '0!4Z[wO~f=[~6{9BUՓxt-vOǬz~囷ـF~iVپ穀,xf.3SmgY[ZTG׵sǶfmB 4H  @\X@@@@*O`(/5kYX?O{~tYobzM63^onIz^h?{̯02uAyl:l$8 ho6zOS|~sW}>'_ @K/lYO)ޢ}zs4+@ޚt[3hѸo]kE kAbVK8oz  PX_   +pf_vf{+DDl J8l?}O3MC@@*h_@@@JhVStVǧ]9}W~;{Ӽ[7ir=[ͷ*օf?3d6ܲf/bZf~`ޙ>q  dEfћɩ    uzJ {7hr^f+S0~ٓm V.b(oPl`x&f nG Nh dg.?TWz8W]*(اC@%ns   d\~%~{z|B?>TOl=uz&firm7B^fN4mn.l}>xٙթ]g  @ p{ 9|@@@^f]n5zdWxOD'hԡ}ٔ^6^Ϸ1!ZN;RaO=pq~rb噒=.uu˰gOkmnE1UvpSYu^   M\@@@6msT,WѼoSUxi`zŞu\Ϸ-CiЗL܂mٕoe62;̎OVvƆiE`,3wr=i]m_I>je3? wɲt@@Lh Lj   @. \ʳ>!ٓ23OR*REsZ/sܽ -0/j[-^YKPc$̺,zl#z+  )m!@@@@ ^ 5ne [ao_-t˔w*_؃/-5lMէu׬fvsS˟mujۙ6w߁佯ZWg網{_nԫf'ĺB9mNOX`Λ׳NhnN_&" /Vi     &)={7·}?N5-E"mG^ pm^;S2s]t${aȬ3,_G/S{NP䃚XZy >6~׆%K ε};ַ#ldNiMG1@@$@O2q0     8-VPmzttG=?a>l3i],6WO#U|ځ;5/j=nƶ%c{C{q,5~=۽U2]id_{o]ur`3" $!@O$X@@@@H'EǑ젝(uXS{VV9\/׫ggvAkT3۟,&ჶ~Zִa55/{oߨ~5[>n^,/kX?XW@@i_K     i%0qڲ_#>85?WiӼլ.}κk֊ ߦlRO eM^g[&h|  @UuYr_2C!    
@ 4,,YP _M-[m0yo?Yx>;^oָ]<6=TX:^П0.45>G~&   #??SRZ)j vԸp^ Vܵ Q^ݸP3Wf+(*ӕ!J*RYv)?()4@@@@(oҠ<[|eIՏٷOif7[M>q~x\Qlõ,t:ӮU I3CK2  @q==mRR6PT*}ޫ)z&4o= U>R6Rm#X*^DK |s |쨌QR*ɏٷ;LyOmmdM/@    e8fv ;'ڱ6۳fٶձ6]gunjޛedW" ׇͱoG,S;5sqу:㐦컳^[//_Œ! *POa-\q!?}MV-N x/ E{Cӽy&6ޡia+aw6xo{B!    PojwUztmp3k4an]ˮ=n|rJlzuaMcE|WFռovvʇ"޸}m~fKt:  @IE6flxoqo~z/U>V+)J{eSF+޼M1J׽EIU/)S@"DZ\lxE/{I%hhR[P^Q"oou[\_1DۭF^ZލSU*W+yINZڗI/\pa+    Q=.;]z\s?eyy_nr m[}3u֨iy_cD>um؜mX!  $-P#% zAyOe2N½S(^ >CUQeG7r~oO+)_();)/+jiC))^>LT/VnS>D<(/*((~G~g鵿ⷯ tVP&)O(J~W 5/G-Z2DZ~OU=cB ZIN~_*%vJz{T@@@@@^t{YZzͳQ֨n@@(hR4xJ/6W7T9@y_o-EzW>XxU|z*ͽ@=pc?u ^ EiVPRz*/*+~g_NjF*ޓ (y8kʛ_mLk_9[M{srwVS2u|+~JN ?yA@@@@@@rN}Cia%=?-O?pW7P)ޫzseG\ƹ)+^RRUU ϔ} i vS>_4Ǵ2EY[R9] 8%hoi^[x+齉/Zqn}-OVS2u|+~Q/4bal@@@@@@@(04:Dkynŋ)~w/Fyz/{ ^<潵ؘeT0#Gw ~^}SJ&LRV`B-lۮu3JIDZ{Q/%k8Co%9%^ǷW_nA\P0      9#PR>~7/zN>i^xAUX`F o_OZ1T9Dy늟M()AcDz)(nmkB_<$ ӕVJ2:mY[#[)F@@@@@@rI{mM ^bŊ2_9QV9[x1[+ޛ|kerPlT}R._*^hBCYx-sr{ܩVa)%[4`^ŭ+wyxDd-w򴲩w LiJ{KZWstQ"      &P\>ފU\#jW)7+^xzRŋ]gIH"K:.Ңˋ,f0-X& +*xo7((71~wIyJyTi6^QO/ьm߆OQ˔dËu=T$>DM,֋ )W+ &      @qE{]\枠}͕&J=ŋǔUefl(zW{;e+d0-`L^o~LTVVhXOYK UyJ0~W:TeL:+T%{7i:KВqeK{?k m0W@@@@@@@ +ڗeV$j^sy<ڼ=*:ǽW__X0&2o=~//]q_… mmݶEtȑ#ߎCΝm 6IIvm}x;LM:^y3gN{Xjeg.1btRԩmf/ ^;\ڹ,  QΨS`@@@@T 9"_|aӧۂ lРAdɒ>%=A>2o2pt%}ʴ*^8S>s~M2-ZdÆ _|1V@&&@IDATeg.}/??߆j 3gڼy쭷2DWg n2d8p`삈Dȥ`g      VX>/[8;}ǔ)7E=Qy%>50%4KӪ+ |UhZo5@@@@@@X_?%ZdYiG7CLm"JxiڱY^i쪕'(QH0J?;Ŗ       42]XTzD ByU6 dgT+(Kt̏?BK@mW%HNK.   P5yU[ݻw.3̛6`@@(Zk?6](dS:= XW_*>[1:oM#'8[^P^ɏc4un{⟱#~1Ƒ" i-@>@@@8DGxϯp P^Xg;iEnzd^&WM{x+< 즌%tF@@4i(УM%=FD@C+KC0@@ g{o^9R0dd>3R*~ LU|kJ= )^OdG5eim{   O˷B `ao@D @ExוhP XP9CY9g/WLϤѵ !Z_+)PE/[y:_T#8" @z >2S`#;t;wgΞm~65*E[d3 PFU?q}G@ F}^2M{o[@_/zBwk /ɴvml{~:cNP%?{:l W»9"@ PE nAC@t&^j {!qUv$PW++֑hR\׸}& F1̈K8  @Z di t{Q4½#o>|j䚀a$+~ @H[ip`   ?rhi<3,$1d%ʷʓڞN ~Ÿto~.ץAr|EXG)9F@@ ("@ \wUmT͎+  @6W醑 /#"QWſg?}QO*C@ F@@@x-Oִ} B!@R?h핏"Kxafd  Ey9Q@@@ܢ4E|>!ePKE5c  Y/@>bN@@@hw땼ZOkܟC=!2Q@ YZ DeAd4Ϲ12Q@@^}ֿŜ    I l+"K/JWeqd @yk]ёjſTdl];a  $+@>Y)C@@@ {ԾR֏$<( ?i /m "+(4\` /(  Pi+!   
vuDw(Rz!!0C{ܕ`i0MytVFٸp3 ym; x\l H@Z-77"użt36>w|{6~+'X|mN'غ[m~{e  @f l>_3y" b'qEadVby^ (FM|Ho*F3 Ece@bsA@v(/mvmy&?Fmǟy7{f n5l`u0Nzya`+1@@ SNсz}i{LgH7{׿9DQ~p}R/FlN4G) @XmM@ve&}@;*lq}֪es{ " @ P-( 4n-dlaJ/e?M/cJ}~nx"A`=GXYYxLg@(E2 p=svݏ_;ov؄Ǧy;6i:n{lu  t($85mejyLBY[eN@v&쪌`r]TW.۹d@@`E /ݏ?&Z5ZXNEfnhO>xm&+/[F1ڷ[s4@@2F`{?~/Ը*R%S0( @Z ^QE,d#V/JtF@@vy4@4ZUZti#z걱Ba]pu9S_wmq;n`2  !pqvpJG*?D3d:p/>xpwK/(~ T - W}?Іk{D|   EX:c-7]*;aivemavX}cn@ U)(z*i @ jW.Lpjʴd\Q J%|L xh)grS*4@@\J @ [WfMWϿʴLxkv7ڜKIس~h?BCHu^h ʕ {!@  ;*Fԟ1>D"2QR%64P @Ms @L`mOhn Y?SIR֎^ȧ!@W֧Jָ?ktF@len5w `@@tNp @իߵy'^~mPlޖmlSd~F dy Kfm c+şi U/PKrnCi~FyLBrA`NZ%axoVh  Ey+8@xjgX\j;\zmwAC?%QVX8nWQaY%Tgrav5ʡJ*\_ @n;5:%?:q@ n9sw}oa_:@@H i6p @bt3OqmZJm?T(7+*K @ -kFv3J~wA$9{$e$c|"Ei\@ꯓו6 ~Z9.֯`_EC2NPwa @ 4@@}3 .ԩp/ eoVyO))тPB9";muiU W?s}nR6| yUvWJIߋkX˃~}\ZR̎ @r@}e=łe]'%Klv]w;c}̙3{׾5.׎Y @x_. sCM{WM0I  Y~Wms%YU9Th}SuX@o0PLֹW.OtlnS5rxj߭qO# .882=S+~#Vn~Ob~ *^ϋd@O}}F*^>Fa ,Ç$;?ܹsm6voLkyFX@ЫfE %ڎ҄ߔn ?_FuG jocdV\ 'h -J 0\F/;灠 Ggӂ" @ ޳*[`[%|۰c5J ķ0 ,+%jV4}cѣGi]r{@at޽S^^ޓJly}}w~*+vaY֭WWQz5v@ tލlZ/f۟9Bh6'_D(@|+mB>Oh{Do;;+_Mȋ3R8M=x/h{]NSUO _PG@yҊo*#/_ (| d=ܣJ`J1yp`s i @e^zv衇9cڭzĉ'O>Ė//Km~т[ T2[F~-ؿi;(PoYe˰Y1|ʲl1zpޏ\/Pd`HӼ wBȂ6o ˕eJ.2+; r :*Fo7p+o dE}oҎxOTW +i @ei=zvF_+2GJoS~ kȴ`R $!+ d?`ff\WKky/B@Uf;v.6lU4iR~+䯲@hS.Ӟv=/*N/2}T_nY$ 3o|\ ߅16za2q)?'VDž.} &,!PRGE\DhZ@@ KjdyqZ'c0y{>h5WVTR^vpgm40E{{@JiYj_tD{t-;Ol|h-?#ن͛Vg]#W{lhckڼ`%Y U}zehJ69/DvL/HIEꋓ1xǓ=ڭZ%C@TQMhܦD/h.{& `@2_V/^+L Y'ᑍ>q4h~e_U*U.xeܪ;_נya‘]x×? @*Hy:գVv&tvkE+, ~Qe<2/WuX|zIJXY(з%+a* j\e/hmu~vpݡ ˳(Q-flX ^, #Veئpa/П_Ty^vFLѼ5ehFpI " @d}옍[e/eۨW]e@,7(^R)z,^~jǩ~,k"P@}ՙX&+xvL aE{Xo;MJK! @ȞSL*Avd?~%qi5qdZEVXi;}&UmTN f-cg~ͺ ܼ}FϚk?ܳ]붖aO @Vx!/*.Ok.lu$WsӸ,hJ3w0J*+Kʳ\TmV(=u˄6?{4@͒7Nc$\ݜr^d6Xh  J3ľzm=ve[VP`:v+aBl_'VG vhZŘ@&X+Ѵդu4kJyzsEVDcO*SLM~ċ~ "~ʜXza@HNdq$S+?UR*˜? 
7(5-m!#c.QՙG٥;lFc=4yX @ x:#X{ӿ]U(Z/i4*J >s 2yiŋ^9BvzF[yX$ۚiA/(_rrXiwYe@@ 3(g.G/d =?( <%sES Ξn&> d7ZիjW=k5E3d_셟\-x(eB|ڥ?-h>hT@6|z wFi=nJEEJ:|06EW\:(F R| d@e*NAS(r-=S˱έXUH(Ȅ3T@=eȄ9V=拶"Ig уmWr0U/pb{o T(ZxAߦϡq$Y*ɟ.zO/FdjV~ta6*${Ŏ #(eh5:+))h;=grAt  YLάh+B(ܤ#ϖcVP2  @ m5anl*k=2l| @ S@oTW?UG)o)셐f o# 4*Z S?s @gƇ*([/|[gr^o+++[(=p/'0j #Vʩ e4`W@\Uo(NKei,d8GO>?=Co߻aXE{1_ש c'T}m}e[[iύC}}ux%W;dRYxY9QiŬd@K` y:rRf̶K|n.E;6[=Us.:ɯd'd|" oV̭_Egdnq|抓azE (Y[@6;g#T=+8L/\jys-PGk_(O(_%0 @ ?ZQc!_șKQjbϹߨy/ A={{nOC%/ܪ&[ϫ>'o7z^|j[dyU$&[+}Um/ M噭5*fc4(bWdT>%_)  @;av[zP}7?ۯSg-ڃ=>뮬 3z~6ܾ4jUf{ݹNA& Qר_ώhؼjjZn?W}e#؟NyE@@@@2@}IipNPf1&:_L&tF@9خZi3[w;el6-l영CnXpg;ֲ~];}l19֮vo.hܡ}ygڏ=':ݟ}: kܸq0W(EGJY)cTϊ|Zn\վ-|?g.=\̝ŋU{犿K]s @N L~>hS{IIׂ}pr_h`;_íF6 O`@sԳc'Ӟ鼏]66sbnխYwWM{t4/׸2nު}}Ѷ]x?ogO?mSL &    dE|*E9ڻz%ҏ|;V}+}lܸ^(Pd'     ,@ѾdAq{g s      @>ޤ*>-.ʼ*>TGmȟsͧ! @Vֶ7mW>)O[۔ mʰ2,{wzѱZ˗/ʑEf0    E4>1ܕǑCn*zl@,rm9vĀb>z~6vir9jj @@@@@tս{NyyyO*~;nsz5"Oed@]/{⏿X@ yk[lOΨl֢=|I1NR Pǎno~o_T`:     @Z PO%[ NZx\"1H]"Ƒmyei2DNTdoVn*fG3.*<j(_+K6_ekz~o'ܺ}x,>'7D>8״25U{`=t1ZZ5e}V\`͓ oFpWgR    TE-v}+|*csl;ʐcHf(wRR6PT* JzD ӸHE*)~ڡEp=FTY_9\$F/g?fi-lݦ-jV[met!G+4@@@@@ -(ڧA!@ 9'PP&+UgUSqV?- VQ~R:+~/d[lEt½x'@@@@@thnHxzt)4=FygתkU={.7QjX**~鏕o7)Ui-ԿTSF+޼_eu[km`PaO; v^P"sW.Q>R|'*PTvHJP`-W7Bi5p%Ro?M1x];_lqrQWz)J2hmkGe*K_ᄆLyY{E|2D!    i% @ sT(/{$m (*k)^Ȟx܋S+^4QeSf*^Z6 "wJ2簱s׊߲z]UxZV;O5pŷ5CyWyN WrK ! 
[ }%_yLYP-evr~Jor%/W6ߟ|i "    i!P#-@[N)6>pnxދ(*( he2_÷ mSʡg^Z[_  /FvW^PQvS(e2R馔vZ)(^Eޢ=֛'"x1[i_*5_-̉zuhbIwŌ)*{)?*~m~B%ؠeb_H`x#þOV/v76F)(Q "  Tĩ~Ishfo}j̋_9 d@#fq @ x9(sR@{a{_S(^F o_x6PZ쥼sKūߪ +[F6uWrhd^q~z/GDFZ;QQ^fiъ_-5DWN)B(nz$\ ZpkEV{rrXX,Wh P1^?MW{& @@@ SSoͰmd_/_rls;쁗}?]0[MMd֣kX?Ͱ{:%۲vC^ַwi~… @Pς7S@شמyO͕&J=%xVc}}L^~{W_PG{ W*%3ޛ}GQm?${/4vT]; `ۻ>QbAE R(PB nvdw3|;w-;wFGȝiu.|i󕲨kXҰ};,p6~>vOhXa밿[j,ְ4ɖ%OXáz _]'xE B3' /ݪK4"(   @).y.M>ԾE|c)^CL?o̘<[[A)֛*Hn; FB@ RC@ z`ݿG5;!Ű~ Ê:,|‡H%k$$!6 ö8^}>E:$`?a{O     @Bw ~Yo0M^([F#ٛ $EC4Z3ļe;SjˆO,^)աIC@ B  =ZdqvKz*I[c .2jw Lb^c.0vc., 1U_{cWly6s{c7(לY[g[ 3Ĺo~<^}s]r9`0 )`''I  1(PTT,"w-H ow o=7 Pa/UWCNj\SG&ȵ4|do#+9V1&f}= O$F T17Yo1"76s#NĻ@esc_~[88ƞƨv\S6KLj75-dmmviZ?UW_ڷ"iI2k{ yަAq]ou}e_#s" ~zڻ(!  P6T_'%%M`_l*7l4N GHHI3MxOd6_{^y"o,EoryL,:~D" ۊ8G2C/{P;E/D*KFZEcYW17}"k^׼f)PY\>қSHd7km3}yLw7ڨY%YssA2Ѩ^T=bv{~u*$@M /A@X`РAu.Fc^Սa])|#xE /w6d 2JIӿ[DVk dN^~"?#r>xwI,ܨGE\6YCE~@?[X'Zsj+V6{>0=[cl˜i#}3>_BYߢ@IDATY29]?/e˹<״%sߏ@>ꓽA@]@[FπEoitU @ayA{^z9Piazw"/DbCQ;[U_G&KA~D_gz,*2 '@"gDzތjvhXG]9k_sWzMZ:!2-{Hf2O1zQ܇08hjH/HV˓]gvKe|Ư_qDj=ГM-BI'ZCh@Wur  !rb/k`8 PyFZzD^fldO*f6_$rEr/۪6j/R5m]E7;!:|{J\Vcwhs8~ϖh9^dG"D%1q$"#Ho, *$-5(RRY4+ x  @b h1Aj1SFjx/ P^uz];J{^w-}ܯXC/mbHwmh/W/gF統@oClȵڛn MB`w:R^-1VZ$ͺO#:LwjkR?/o֋ދ__lWhsz.v7Z  [Frӱ   @ 轸Z]> /Gj *0'5Odikmmۋu![ z+qCZc P*㘳g='7my(fc~k^#~~WSD ֌]GJ^vhs=οG|̉  PfLŌ   {e_ ?COi\.fw,}ٍDa_3g$?YV$#cn".g&ўWٰ<%gX1gNuzWm׉=ώUE@sD>y@q5ayV%׻{dw.G@   '}4^g[ ]> #Pw[?ia5ц+29}ig[qm%!@45 E8B$w>Aq6ޓO Zǜ=.K=".qNJX4ڗP9k/L(rP_e@@ DCcv@@{˅S4:=[u_1Wџ$?OF{2:"r`ӹE>ӻ\Hn! 
,@}d,  u2A#xkڕR BhHcE>\IӴJU鷯x{Aow38!m\ Z܆"ӿ˼Y7;McWhsvK|<ȳz|3I2|_e S Z\Qrs{mP$&;\rrmg@hGւ  ֽgv[ 1HJr"C՞ymvwjCfC bm;_EVyCT zD>(rԕ0,feR8;ǜx4Woxs%D}F U1g׾%"},2#kD.~f|u|Mv|;O¢Wv@H h)Y֋  pm9F#^rAB= \nuϼכU_=ڈ$4败"i"gv=^dVoA"eM-H!Pǜ=G*+e/+㘳bn,rHz<AUYJwu9!{g<  P^+r  "pt I=SLBRD-}e !P^ʱ\yu̥Nۗ,O:Ɍ})ɳˬ$u*w'M[ 䑷ʄdت\~jm9P﹯~͑Gd4'[/S.ի&˅גΩtz/ܼ@~sMrw!]ʭוvͪ]F" 4ڻҹh^( 1.?OG g{Xu!QAB@@NIstMY!_mZ+_=Jg&m\?%hetpui(M#7L6h;_m ē.͢elٲE~駟$/O+ƍ G2OmDlڵ7ԩSc5` $>u}1,   ; e߹r!5iW^[|o]ݫf$ez;|_ʪ"ko 9גўLfRFl=OjϪ,yEݔ-X>_Mo&zo=" {j`{K66lXý+WJ߾}=o2|j/^,*K]3y/ ׊ ۷믿. 'f͚%}Yi @)PŲO|FEC5ةAB@@B`ź|~lX`o#%-K-Ӹnv{xi'eoӖm7o-+^a٠d$@M [1.gРAk<%4{%-ô;mSx?`v´جy7 ̿ʽ`yDB}3 @$ op$/ʫܹs\E#8bG^x%Eſw·bք  AFqm%Ŭuti[kɮ3mS3Erxfh*i_- e轻J+ @ ʴKhlH0 P#p8U:WdGs+h ;ƓE@ `oѡC֭[P#/_4h,*G@@#` 絪'I[I}GVȺuwn!7x\olp6Uno v|f RVʐQge@h ׸Q%]dc_S=0كxBCƒ@H<Oh]_< z D"ck3F۠Qgϴ[ d"| *@OʩYk}S9 /tA%g5SdVO+%*mD-{e Պ9BcZj'~d@@OwȪ/ϾyM    `lSQjTM҃4eUK~w8G_-t?k\ش=~ƙ5*,%-m[6SeYo5~pgj-g4y8f% Q%]^?h)3"W&,B@@@@>|Wԣusݤ34jX}85g]- fk8gKu7?0*xKC@r ۦ\e4tv:J:6]RhliW7=vGjdZ29\W9 @1#yfrq2d6x=?ʇeA{âr۟JcI'H_yzTYP fK[m m[x;{$'%Reҕ'!    &@}ُtS> :dOר-C 49|@i> Z(^v4 4BMmu[̏ @ b,ٴų?mwg%xߣiCO{Yq[V{wv[6/-ٱ3N=ZiulZݪڮe,r緿J~aa82[(po[ ^&^nu\t蔻(Z7_h-uCEnMǂ ~70ֻǸ=eW5Nhq( kp4^7>?6lHp*-xHJ&Gc2(Y@#YG}rJOr1reCz3RK{s'!uN<=u7u [&ͬ+QS_}4NӣL   !ШZ=m er>eqyx[}ԽZJ{4*,*.iFNڀE2xZICqE]K*{\7(A]Uc.T1毨\EY>TPŘsdy@bG 9vZi%m[k7[_8L k^UZe#H(`N`Iƙ\a@@ v^3 @ 4^5hyߒ9/ S%5E6+2-Y?O~Z7)W-{թ8gx`5۶ˣ"    @ hoOV֘aBÆI,fPo po ֐5ґ:p/+ ,PQ~s7ʳpf9\g͖%_4t\[.n3dҮ     ı{ܓun|@}4kWcFZp RrПjhaǵnF   -` u3'sJAabi5[xٲ3O]|VJnbmrߏ|Y^@@@@@ Jϙ;_]v ,Ra]Rx1۩lSk4zjtԠ@H -` ~9+{GC{?7yڬѢ.8+rMo+x"pj%mZ4 \n>    'F4XÞөhՃ Έ%B h7CWf @X=vuvԯO\<Uk=]Z,i}viy.ﶷ<3i{};WA@@@@@ b`o@uy$ B 3{3te@p@㛋Gg&R0Ojr ^p jOi8E=_g| Q/ !    @b D q6Y!؟hlh x;B` @p wn[úzG0     #FdG@@Q. AK}8{]e~2     =Sm@\0o`1,t@@rZiPGfz?8~;{)[E@@@@J}Q@@D9zۿׯ_4?     
%@}BU7;  &>{.TPT$ƍ@@@@@>=D@pA1cD     4ww  .8c[TE~<     4ww  .H<~./9C@@@@phEց  @N[˿B_y@@@@@ ~hߺe@@bDzsxpoӧcd(&    WFʱ  a8s;iVyyy2e@@@@O^+@@HMNTI&Y>5h$@@@@+NUu3  @ Lm;=~`OI?E͛79 @@@@OW  @ dffJݝ9<    ďS   EEEbGʹ=z#F@@@@>~=A@B@NuJuysy@@@@i^  q#0hРѺ3fʷ#=u=]j     ā=@@Qhj^ "    @h'n  ĝӎ=L5"    @ hH@@VCݳ{W]/ 3    q @}T"  yW/;*G,    ĸ1^@@ :|`@A@@@@qc)>  @\ p! d@@@@ec(;  @"5oϷ'!    @h.  ĵhݻ{h D@@@@>+# $=ޞmE~    İ1\y@@ a^=- ۃtx<     4hQl@@X{;ֱwE@@@@ hZ  (c/|cY@@@@1hwTlЫ3B` @bF3-%gk]@@@@bP1Xi@@ !t:bG,    Ę@틊Ř6a6̲Ec=d'!ğ.:qd@@@@@ "hu4~h\zm`2fyܘn H`ef@yqWgD@@@@@@@ ܯ. QF@@@@@@@.:hD@@@@@@@ mϏX5    DP bϴ`Y5  .0Ǒ'       8@~S"-V       UlN@@@@b@@%QD@@J10G,    Ā1PI@@Rh/Q       DCn$KQ       "iM#0     [IU\J  @xRRR`fYcoŋӧt՟/@QQ2 4,3    Wgڇד! Ĉ@7s̓WX/K.Z0<     ~o@@b@ {x5j_>(_LJ^{klظ?;6P%==(O@@@@-Q:@@K`CޡyD}Y~ewnOOOeF     {hwoP2@@88sާ-G{^|C~;=ާsIO68 IJJGO{?    Ĥ1Ym@@ VL%.{FZh*_$֑Nv.ڤ3$7w?cGT}@@@@\.@+! Ć'_ ɻ6//Q7Õ#ݺv,^w=$ߤ^w:iV#?03    TFφ@@Iw٥ yq񫤇s @@@@",@}Y=  @bqӕ}Noe\8G]:j     vK@@@ @`59&     hwwP:@@(|M5ʓA@@@@w ht  q,Pjg}G/# !    PY4W`b̏rw;sB:;nߴ1=^@@@@p@ G@@pnݥeӃ&X3۟OKKG2goͶq{?ȇ     n(;KM@@@#b՚ ?ȃ     n5D@@؍+%7wjVbAB@@@@>vꊒ"  $0s_A ʓA@@@@ h:  *0kNp}NJ     ^[7 @@= cn}:ʓA@@@@ h:  *=~4ڗ H@@@@\,@+!  ;;wy &wD}@@@@b@F$  S󥠠?e&U?    ĆQO@@ )gv'    !@}lD@@ H`A'(O@@@@bCFب'J  @?O <젠<@@@@ c(%  ~ 9{@@@@)*%E@@ r8,r+Nƍ?G@@@@bCQO@@ŋ6вeˠ<@@@@c()  @quQ]բEתU|Xu_@@@@ h<  QІc:;;[6m7JMMMY#    @x³ւ  QQ\^y# пk4cf< 4 MԀ[:q_Xo}BxNJ,4D6eN= =_ u:Q/zq:ևX'/qa@@X8Q/yދj-EC^%uRBzNW'V"[/nyP'(v;5CW\SP'~ P'^ bR'V[+ԉtk`hz$  @Mcy?"ژ\[,RTn$ԣ8:'Z}bNDe+6bަr~ wxh'<÷.WjubeܕR:&)   ';&rk|Y@@@@bMFX1ʋ  vkZ;V 3s7HgɏD|̑Q$+rE|[$k#  ر!O^M/oi73'_\'[bzm|S,4G  a/t#ERTT\2i儈o+7@~   @?7Αտl"EO:W#L]jh}@@wjٿ;!    
4'B-  .p@ՀX?t]qwK#ܢ{ˬKN-m^%M_Ȑ,1Գ'}3wqNDu@@MvyT5sMo,w}|{yϞ>kl[?f=\>1MTIwM&޵@ v@1gq=Zw^Mg}{N`Kz7!ӂ֝PA54Ѱ[QrEdרa|EygFL;5JN,u4\W75,=qgHzpp^h"  %3#0}h+p SﯓG/YUjʕ(\F'3NM;akiO5ɥ]%7:X^76domrϏ7K~a\iT8q9S\YrnNjraF"g޺e:%uh5"   |<2pL.74<}5ίUvlpֳe-ҰG Yyz_tHa~tTm\Ef<\:mk_UPOs)Y(KV$ҥF*y3IN:Hu7hXc \y{4S,"j~Ƶ֠>Jd÷jX5ެDI4|ְѰ0H]{o Zaf4=gy]Gt"+m̳ya~a4<4KNURҽ]Z^UKO*ԓCiYn+'3o9ϭͤ^ltmZ+qEϯi1B#I~4~L֨Ⱦ:8ZNXO{ҕ3k 4hXL #  aVZ6a"t%JaQ-#5C.lf:`oSnrlRP/7M{=^TK&?\*_-LkJ@IDAT?ZiBp   6J޽ϧf$Kˬ588S`oS^Svd[nU?m=- `o^d$Z G.+Gqr= O;E:4R垡ަ="k{{?M?h-Gi|aVhX}@e  Q Lfvʭ=lZǸ7 _SrRɟksz&j84j)i4ggPDcdHQgHOOA 'h:IЊg@@@->ܼzsk.IUzg6 k%֥h-3$9-IYQ]K潻FCn|u,RwK H)4XDDXDOԕXU.)^?F] 5N-LE zY  T\}jYot@݌zݶ:˷/"),W'5;w}z6(y;ŞQo]QkvO<>hP'BƲfk{ŽMQ'g}uB@@ >2yⷯS1%_2f7CE޿gwl̗¼"g[վuWh8,A~ړXD6.+JDXDOlX5cgXa j|^‘h"@@@ 2W9V9Ƒu@=h{R˰9]{[AgY7KN>{Ψ#'=S>i_Y?3pYR': 8H:n(_k$5 Oz^^EwUBi*NVmY!7L}4t/zw MR' 7d­˓*sMZvݖ4> v{'ΐE9[g*xl,C4YK*Dp]}y0H~g?_z"dɞU>0YX+uҤVǪAx/ˠo=:fT7޾h1RF~zI   >Z>bȓuC3_\o_}}͔Jҳq$#5SNlsL[=Y8PvoҺV;f,ٴs|۠=} [ȅ/?}nZyj҃ʕ -ZWJ!m. 6-+ nhe r{e9c?ˇzw..Ͳt\AD6V\~N|u~}r_{a ka'~spIQH3Jjbm9dk}!UsZ_$ 0Yw |f%)s^Y)I)ڸOu\Y+n IA,iګZR3Siud-2Ryis=Izr)-~Y̯^>҂Jp!{Gju"j|1U#Cg>VcFii hq։'=>WthXPw>JB@@}S?g 4II>pX!;_ɃG>#E3vgƧؙ4r7[k?˥!M"URxzOo_< Oɵ61 ?D"6u9\=>"0OB9Q/΂rʂ X_؍>ʮg(붯Sڝ)qO~ΐ)WԄyN˴n9a}Mmv˴03_O]5؟_FOH! l^`&r3ʢ#j,&_C}}ZY;+ Ez 7 g>$?8?^^!O_@R$eWNdl|c?(?4_v_\y}#jJ{cY[~v#g5<~b& F74nӰhk'_rEo԰ Fk6H4Xǃ_q/ԁgjw>Ů   "3tLg@C۵o~}D^:-|մ*AFRJə4ۺQ$nNU/NyBޝ=S'+䦱KzrlIHjr8kL_ Nϝ&Uk"kC=")ޮ@;P|wXi>~PyfүUr>v%82G83^ !rn{̇/YY;!$ZNbyRudK$95IN-G&wnM=>JRҽg-knۤjti{~}NT彍?XwqA#p_OD4LW֓խwi5*ei|a 5;4xȶK @'lc7ۯX/՝}x D۞=Nr VӪc)X u2[75x d텒Z5E>3CV^3U;b2`wѐM~|,ʕjM=d>>QTe_^ޔ,Iʲkհ75,_Uþ-ҨQG]HgK6["XYQF)4͏S: X   EN_ XSg0F ==CYe"?,+/O}JkCjz/ ${[^ujM«>[cƬggVT|H˖i2TL 6fKMk46d\E} XLY5QNv,Y"W{]<)jyzQR;mRJm˾2cTYu{O 81OX˰f!; q[4*;:cWv7qs5.pqU05[y_5&jX {۸g44tܣahjا}tR0 $X? ֿ}ޯaur8嬏b=Yz<\{4^ eʠŒ FYҰ.T<VOj]<.r/z xWk<]>V:=5"{T4zgk/v$?eY~;%Y{/f综A56좾2wiڻjPoz^[vаcF(VٖoM;›N,4]Lg5mjOvUbv[}6k6ܩp!  
@y+X38r˾Cko4m$Z,2[jX_ ;QQ#d fixYNٟN V' vH##klɹ/oiO?BWm(ߞ?EjTK[=k9oK'{g|y`~%2Ϸ+y V#E#Wc_5ʚiR y3\bsY1޿^_/L{гre= 6TqTvְُ'hid'˛|9UWS?C4bEaMQ'?/Qns,;,<}쫞iԞFH c װ;K?isQ+=Y146Jݞq[('lOM[W}^|6vIwS)SoS^vV-^}bɾ-~ghغbWk'5k5GQU"BFiհfoc5b)I +t`7Fs ۫4›4vQ5ְVAWeava-35’>irTMh&E~A _'YɁZe_Vb+ƕ[FE^I/Shwʷ;5“^K5N8O#.XM4X`N5YXN]-аY:ٮE,\O{K55*z$*b9]OL,y9bķ;ح 7ie. No&k^tﯷ4^S 8Q#GF'wfc1 @@#p(v2fc^]=GclvWܵ:NLىR;9_9Oeta.,W8rmY=r-ݼ ?f(p[WiCYVA[$*9a]ݳF7C_ʕ$XVh`D6.)uW'Lq"脑Û;4xiiJƳ;GGTcE :(G/_ud1ޢϫo%_d{YmvJRqOଶk=se6H  wh}b?W`kSi󫵮~XZaJ>/} J4;[h,«56k^ܥQڧ@Fy+mlGP})hN>i~BvWs2YFݪA0ͅ>I4с3Th8'k< wo-ηW^`S.Ss-:qO_zj'ENkQTW3'VO 5j$xJ"M@.o4op>:I=lqxǔwʕI[MRY+ '}ɮ  Ĝ8Qj1  ׂ]qwpRǿGE bO,Xb;ML?[bFcHKaQAP"̍Wvv<̬K_iM~Sri \e?:;T9:I|P1c?]־ղN`Eq%*{5Wݠt#qyܮUjR[}`L1?wj{Pg Zh4ܢ4Ocqp^LQrԦ%V~ڂ1=\)+铭]6᣻L^0j>6WFWwH%ג?h o 3ф4]뜧ſ'(k)Y=#DwM;5}F5S}U|Aݤ .rm\/L~˛*eo{~QG$fj{xY=xn ]s],Vuauxw>~Vxӌ>.GǕܵB_\H6ܦ|"A~Ql{tʚ}%7ЯGp2Z]+1y[qSU3kG |=B%Bn]3oX7kW G30@@ &p.4|lp˹SKvՆ?7UδwV!@ł[yJ[f QL}! v-\nPwNިw}S]댷?zDKYQ@Dմ9?0OcӔ|;޺`V2{ۼcdK ˶ %yQWPOyRq1-o IlZb{ ]^1ԩ&N[0١nјuuNO9KX?o((}`꡵W۹2(izLPQܢ}XJ 㕟)+() f|<QYyi=>aNܺ޾|uoOe j Vu˛ ?V|Vݽ/iܯ=}+%wgqEYK\;oQXz1^7~Sz9[7uL7ĶҸ_$ ?k8_ND4Vzht&#  @aQM^q0G+a?=7*'"{[MouӺaR>|cJG%6]3)9o-}LkRYSG(T^U2i/iX|2S)x1CVmpٯu_/lLk +T.S\ˤ=9wLI ~jx{ROhrmvWeoq0ɡMV5Ó:|̰Er]kSgců[J&/Q(>1pWڔ{++Q٭+> {lܧS<%d_aWnM{wI3|f=/|ESÔ3L_{OQ_)|zEXů|*((9lmr.V   C\/4p]4ci L7-\y*$n+[bO.~ET,WN% CLd>&u[6 ssPMo'q?}%/)y[q??Gvۇ9{w;aW/W|IhߺC8rU/_[Qdڲ].1O{愖S;{=vr]oJ|z:2o*c_Ph[ 6>_hպe_YnS۫ȟUۏ^C]xIOhdڶt沄SݵaԞJ~Ovn'8U:kԻ愘,mdI ]]zw2m>5ɪ+>u{nRzBk1&h5_C|P >J% WQtG|bNݩGVu&Ga5   ߁XvdzAkQ=YxAQ~qދ߿l6-ݿ vwm^5=mo]NUjk/?8L;Jg=fSj2y֤Sy߹mztn1F'ϙVzXdAEޝW^_2Z8O:5Z\c(=oy}k2gn9 ޹O0¢Kҷ}hө<[qlˆ71XG:c#.$=.J5}y\xojCkR'}i7B~CvcEPߕp}U7<&j)[MiԁnϜAi/V6Duk9,]W]zkwTnp.kE}wknɬjnm=<ܮ+.N2φWT)s?WM6ᜭ/y˹Sÿ&*eb1=v0ok}f4?x!VR|+EO).w=/;q?j7h  @ͮ۴oH8%*:? 뭰a*cjѮH¥_ڷivsBc7!tm߭f =7s«_TsEo_mxej>cw}cZ(cMriFUmjFΓDc?'ˇh 1)P! I' uVޡ+ {=Qh׭:a'O _l޺waVǏލS m:NnL&j_RJ4켤T7@@l߄o׈凌@^VSwo?3kϟZ׾Օ>rI8bj +O퍑d' &Ϟ.ᚰrr'>=}cZ1 pLhcR|DŽ! 
@n_UShw~̬o”g-d~'O*t!T/¾OM>~}o.){vݏe7eG+WkRMwpZqMW*O 4srrF> @@ ROlyyl"'ЯgjvYhlpm)zc{f5Vwvr#Pcpg~|钚N{GV͎@@@ t]C\ۚXb]q=kؼ) BwF:Ow.8u~R{ |_lS4!'?J=zrs4|JG5M| f *UoEh  -x~oӈۥr_;T3|Qtz4е}0&<~32iHɒj1 >n}Q4,   g콙+RXOW'[mGwO`/^f}myamv]).bhG+!~R[X?uu״Sg$Po**]y&PeCze9ۿ? J>n-QNQ?~]|blx/ SӼ?QƳyy^gcR3@)su}Zm_Tf+>&ZT=y[4@@ͬ۔O*H voCWЯ^Bs[6ӿթ~kt+|%[ze# L 䎷n)؟E`Ɗ@@@&t>L{qNvcP|;max`ôfkx-G={r{a6%h补+tipekj+kC4!֊o?XSkgM_ZlkJNq8j[iE.:KM7]i>ܠ35=Wܷbwk;osfh  sUb3ոJ*PMxp=A͛:Bt[[{бM0lU1\U]1CVmpٙu_/lLc@@@6cftJ5S?ɡma] ^w /tlU3<龯O _4 j[le3θGeop?)]MrO|}/xڲ)h$ne}Դ衛|zZCEqZʖJTa\se.ڻ=^?)k~oj&ܑX4\+Kϊw{P񇹯*# @@SwWu/6VfREnP?wp}toh׺}تpfgN|pͫ+_$o! ᪗S7%ocR]]^JXda8#w04Sa   P|./8tV*C{Z~_Y) طWM?=j5鮩;ByX2Leec4ʱIl}DWg/@ImWp(wm;E((+n.ڿLRU+.0aGBTqtۣK 6հ?P|rʯJs6Zχ@qRF)7p< StB:2ek2Aicr5XG+P۩ӏ4?ͳC/rogh"{o)')+Qk$G@@kOE_%1Zk/?8L;ͱ:YUUU8fSj2y֤Sy߹m:30bLh~|䰚n`ɂ;@1qt f7@@@s̼n!'ɜIm2\;o> / KM]kݱIWz߫qڕۺભW4ԮNZ1M]u>jk5v}`s(7(=p2EXY[b+SՕj+;GyEGguۏ+Pdm[q1E}o{ANNԼIϻ*w(@SqⶡrSR)stL4 @@6Dm.FxDo7nuh! 1v]cwb6  @~.6=rV0?Zd]Dr₳C]u{GŅVJ\tQSE5']DvHlR(+ 4UNKZn>?i'lx?]V\v;NYECtRӰRR{21oGkQ|,}5R#q2|J6AC@Z`EO?i@@@@@r)o)JcztA=Wo*#]Oo~ŷڤBwsڵְODX.6-ts]IQYWSq\/3딨MӀrV4GvM'X޷ۏOD tꉞzLǤ(9@@8S{9w~uYlE?> \ѱx8Po|tLie=q)1)c|pVlwNuJ{1)>Cb _mrۥhq|v&W .THEum@4ޖG4s7՟EgG {Oj>S rQIK6/;49tLX  pmL'j^g'q ڵb(~G1?U,ҏz6X,ҏz >X,? lX< 0trXK? ƊŢXS'G1lX<Pb(~ EOS|5ʣJ|5sʉ Ǿ]~sʾ;OpVPR=lǷwt ŷ_[D'UaJHj('i%J:xCS6U~=;i阤A@*Q`%vܷћn޺h[mc X Ǥ{q=.{1=)MYQ4C͡XR͡ӑb9.SGXq=(g=drL\\F}8'5}Ǟ{VO=°++[9Lq?vr έ+'$[ށ'|i9Z +(QVUU|5ʮ]q( 5϶. WAe/QO&Ÿ**?WQ@@~Mb/d       *pv<^6T @@@@@@@B k>UE q       @% F;/*}G@@@@@@@+h#xBlm        PW ^#       oAB%^?1e        }Bx[`@@@@@@@@ ;j񂽇&Y;       D%^@@@@@@@ȿD`Dl@@@@@@@ݟċW6 {       @a s5ާ0f+       T@?JhNr       Nm*^L y       @e l^ċT&{       @aksKoU.5@@@@@@@*O`?r`ᡕ#       PX܇Jhv!       Pk_2)k@@@@@@@ 'Kċ.       
+pv=^ZU.{       @af*4[A@@@@@@@ k]e       _`6/{xo-        PUx&a@@@@@@@(L` i       @ ծċW.{    h˕8묳WUU]-gQF=,@ 3W+{TS@&O#uOwk9Z[B 66      @dON>c'I}1 O$߼H#ǮZv}?1Q@@@@@ +ϊ|+G9 E{#FݻeҮ3gcdž5Ϯenc)׼ pU[ 3    4K}X`''.K/]61!,k9"@.֟x"(  @a.VVf2z  @Q P/r:{snjY5+?.8{_ݫ3_l@@VP'Kt|s}Q@@ ]ޅ]tgɷWy\N$BMry @@(6vD  @Kvmx}U @@ 7*"_IBJ;%6Q3H2AV⭾a_e||F@@(Vzd  @hC%6v%1  @+ͺBn%+P+?nb_Es>) @@^̖Dg$  y8\[WGlA@@k6~w%P\2KIyYS.U~ph  #@Ѿt=E@Ƚ@Vy|byi" /&Hly;EzǷ_dn҂W+gC@@%|* @ ߬G]lvua8 jgϙpyw<7+.BX.^8\k/E!4(!_604|elA@@m5/QU\|W S+w4emerS_ {K@@JR}I6:*dɒFw:j/]9,߳G;}8_npq7]\e=QÎuyn]-|GuC Рgni" .URUB)(k*5ߺJoe4@@H 4v+H E.ƳfSc_N,H S˷uĭS~p/Y m[7CIj@x|u׎%1(  PgSeT6֣V([ݗ3zlUly@@(i%}<T;޳f _._Q/ڵ uUU~Rfboa;}-`!' J@(1Gp_*~E@@ ݵҝ++(i?Jg!A@@eY0W^[nQ怰Z>>_. ^{吟_722xp Yg)ޢE›Vi @ S~_" .0D;"t?+yP\D@@@ Kt_gzCF`]Z붅 ֝;pʱ}-4 r'jW)|pՎ4(r_X}cb " (E;7.UiK4 +;QBC@@P"@r-0bt|ko4&4.P˩_NGrVq÷9q^,a>ܬqn=@a@B` V8NV\P@@@ *@ ~ao5McBQ'f͞W'. g@+k{x6>0  PaJT_-}yCH++  Y}Y= @ t=& βgڝֱ]zYïD@JMg}[+t\+]D!  @(!Ӿ 'a-6 ~M!c J\`D}k|  PJ]]理Ռ.лPW   @K PoI}PS5kcayõW֪ұk_wPu}uwSi @(ћщqF@@b@Rm䷚oBBC@@"h_D @V:ZU9`0l?Qt3RΗ߬!⚛Gw(O IhzGG@@e=5B*J~]!  @ P/C@294fO}Y۶mڥ <~ V+ߙg@h)]iqWh   P"K@M@@f J߉qF@@B Z.tۚzT+4@@(A%x2  @V.w-9ID@-AFκJmf|L$@@[}qz  ;_oGF@$0XuWoO"hmuyZY@@@(ڗew@@> E  ^ZˎP'N׼~BC@@2h_8;wLa~@@ک$:8  @6~HRi4ӳeBC@@ h_A;:`Э[lfΜ9UVSNyY?+E@XO{ԣFxD@2]w*t>*;g !  @y P/^UWWO[oѣGE7<.̕quaĈa 7~eZܬ0 &Ng@hL`E=[߯jqowb; @@X*@~)Ee=R{~uy{キ`og}6l69"~'98CC=je?S̏ -@ѾG@]m(j"@!   Pr&5Am&g'q &κ }ش ¾}*^… e]6F{{ )0018  Pm).;s4TT   @&o>Z!3U{|sU(g)>9 j{i`h  @:3}g@*KJ+}U}&K?O?D75LC@@h I7or7E#9|V:BIQ\)*Lۚ9*W33Yy@@@t(ڧ+U~RI9>Ar%reW[Q06̓AGh%6[IJ,+;@z$6w&@E@L:i?R ~@6>׳qSOC@@.@>%G[.@hDb%>!O~3nW .TV}+/y:sf=~.K7YmlUUUl/XN 9XO0 @ @2T䍘$?% /Vh   @ D/Jԫg=UiJjxori{*QZ> y[f| TPaF|ȱekcV,Tz>; O! @ Q] de] (yZ@@@Cܢ:EՙԛHYljc~ݦVyxf h앰@ Ok`Ljih  P|˫K().R2ihf}5J!  
E-Tѳ;O*QkӘ/׳o(4 xȕdoVz@ NAq.@@E"PdrɗPi   @I P/UΦSMYMJ46>fAIj}T̀ph/'#@(o M%")Ez5    P^~4JϷDJ:!@pk# PHVLT^=rS 59m   e/kMs}σ׷ͦmJ@(=kKo1 %!"7ADMd|!@@@*OrSfyˢ_O@JGu5eht" P*_t/꽥s   HNCi~v6߭60Vi?=4@@&_#  d%"Olow+_T(    @C4}JH/4   V0;wН;勱N-TTO  Ppõk4ަ` N_rr0aB!   PH۵5mBv"mm(?X @Kvg_y"{ P*UTf(v    @K;DG*Z%Ӳ\! P.vF|S5'5ht&oj<ۇsQ+ŞVÛ:}K>OwGl@}{(R,PΥ3齎ݔn @@@ZHGo]TwƗZkMÏ-h- ItS~GWJwza%j>7t yL[=ԙv,T~_#xw! P k+(o)))(@C@@@XPGrbt8Dj` U<*i.JsQ_lZc*\gZ)>4U3tκ*K nƓwD,@>a$N%:7oX:Ӵ쿕Ӕ͕ @@@TXl]˦[ke0 vغװoAy~Dͅ#_Ivr2Ly\)_BM52K<+ EC-ӢsZJc}Hn}|߼j݊srvk 2Dxc䲧B@E|핳GJߪt}w#A @@@y^a| guQ6V|2KC=]rBj>40{ړ}Fʝʉ\QrR߾k2 HK ͵!5 zA]4@r-]EYl`yVzXN>ڟ L4   TWɡzeW55͌;(geʥ`oԉ.ܭWY4vs_+(6~e/2 Ev@  @j浦glb qBO!ߧ|۫?(|E?M-]wK%jih$WHEuZSEo|{JB7U gb?1ʧ hyZʏb+:_SMf@ɯSZ+w@H~+` ?F@@|8Ec@x|uۊs@q{RYP--VZ*=9zw1ӖSQQPU<4W'TU=S+1V @@/MkwwW2m~`EEz4V]2]/#  E"9`gH:D7Zarl9+qZ3rQyLiN}u7nR|bo2ʯ;SC@\ Cytl+hw@^$4 $V]A",t*~BpKwk!  @N9愨V-4m ր o⿻&wޅJt+, vU}/9ʃJ[}}hhz:>MU+)t#eu_)vk @wkٱWհ^'1 P¼BޯG3m@ȯiMה% @@@:p0M *v$](1}}!_r]﬜XNb79OC=_}}H.q÷I4ĎNM|  j|qGnh\=퓜E{MqeB!e&̛*.GG^QYn@@@M}QU u=s0|||= 5Cb&/MbtUmgiPxul>Ğ^:8xP}{RCPvyDW{].즼  @i uJt(׵Mu.z   Y Pϊ@(+C} ȷm@_7Klc''6A@W-PqqOͦMB.~|@@@ȉE0@@2sE'*g((> PՍ >*+Ni󴐋U~<     POW@@*E=>XW2~|)4@ W\|'`z>*   Mm:dž@@h!r~J_YJp LT~<M@ b=%*C}6k- 7 @@@hߢl@@ܬJ~X\9S@hEQzCj>ޏ)>)   @Q P/Ag@@PWop]$es "4,࿟k)F"M>xQyA^z?= @@@h_" aG,WoGw)+++4@Zzwλ(WwUmhA㙑X@@@hG# uŷſZq*vӈr$ %E{|TVfZ4|ċ'h   Er9  @ц\;]B!@ ==@@@L(gż  d/@^ʶz*\nu!   @q P/B@@ouTѷw*_Gw1=F*x܏(l>y]%^L   *@ѾB<  Pr7*wpŷcwCO않JI=,Qh@[jJTO>vNdwk|BC@@@Pr#r/-cVn# @.fi%P}cme=_*WŨ .OP(, VU~~U *'mVV$ .sK\k>@@@\}`v@@jI;掊 ;(ƚ- G"_ tyGqV)*B>~ż ~'@C@@@ PoaVk\UU ^ } o}h&7/ >hMvaлwSf+5 y_:%f-\0[dI7n\裏 A[lZj aK  @|}>*=ni+}+kDM[@Y[Iq὾¼>$mVa=yi    @^rSks*؎\`5sO7睺TcҤХK(';}]wf̘=؜7?Omd\ˇ+|we:~ZSk>رcj6@@.;*~TͻX+Qm y r\08R߸|bI>>W*OY7    @? 
I:+%.)iَxx5·ٙԨ錖~hߵ~Fi     PoH&wwתnR|x^[&j[3?t{>%!_{_rrBC@JQWJB}EeT뱯B+iN]J<}JTO>z    @(4: ;\F|E'eLc'M{>|U&]V1NEs   @FM>%j{]w|rJc4vUv^|q̬'.NjS @@@@(g~o؇\ Jݨ G)&hmZR%[>/[̦ ~;A7RW|SCX  P..6;oC>IOH?sI>.} a (?Ü)    P3?gksc?ƛ?;Ms|b= 4/e+?=^yQRD}OKy9x\@tF@@Jp!=*h^I;ξ"   @tHIa`6srF=C|dV[ӮK=f2fY&wk>AdtTQPh       Pb3;` N};s_aqƛTy">1Ïh\UyL%b3kgŷr*\D?k?nB4@@@@@@JH"_:,6nU6P*g).u<wZn=h]|GuJ;%jihG@@@@@@( .zf]N|;ES~O<2Xy&fڒ5Lm"-|sĊetg:@@@@@@@H(ڧw`|}fm>U)]yڠO"hyKu(V3 a+āJEO@@@@@@@(7},9&̜yKc̓jft<ҙ' /i*4fil       >kMhҘ7ܦ7vK~?yhlŷoUix@@@@@@"AOOo,KjK mF6<m@VWR,rCFI@@@@@@(  z↟5uұg 7F6s,ۻK ,"XbĊA11 +j( Q41AK$Xr X]#X,Q,y wv<;3;sw 3n6岩u+ Fxd}T @ @ @aM_ggBm7vf ևs8~ǺNm,V^zfG-^٭l@ @ @ @д 2{ռFhukVy{䯲چ @ @ @lWg2W9Ӭy{_rɖg|،ͶۡY>V gB>"ٖ @ @ @* hm̑6QLmSG$'Jt}9Oӣ'֢ot?]<2A2}"@ @ @ @` h_E|⛪6K_+i 6U\]ku GfNͶrԏ$O'^l{E @ @ @д ,zɡ msM6KyrMyy삜'${%'Lxe @ @ @N[)gp<72I>3ߚf͓} w^m^`k4[4Hng=c7&vA @ @ @ 64/~^qWmsGo~<ݬwg'7Kc&"@ @ @ @` hox!d‡{%3z5On윜( @ @ @ր4wݺuoHiIOag%va',iGy0Xlfmj @ @ @[X`8ҰE/<,˦lS:1Qfұ @ @ @X;q>q9jjc_Yci緖Y?E @ @ @`rc  @ @ @ @`Z`YsК_>Oml̘-\`qVM}ȊI1- @ @aiگ eoH4WB> X@I @aC$?_O @jhگc M arߦO @ @ @ߴ_V;%@ @ ; Y @X; @ @[ZC @`[k}=Z|U<' @ @mR{X&_Y'E @`_G @ @ @ @% h/σ  @ @ @ @4o @ @ @ @`IK` @ @ @ xMy$ @ @ @X< @ @ @,^@~vI |6L[]Y~&cWtzNvw_xܯ3~2l|B0c>3Ƈia|rϵ @_eYIDAT @ @v"0HNNi @`U3G9dL?65>,<3H~9Lߕiɥ>NvΖc~Rw}>sմ\ e?ϔ @ @N4i *eTYvI-7۴o}td??$h ] @ @ @д_ @`^Vq~,<'rrd6|'9*yirϤM%}->3m,W¿Ylr|҆o6[#Ow'bүڿmrɷVӦr돓O%&G$jdկ~oO'8`_czNnycWJsԊnf3=1yzrd\/«6_\5s|Xv%Mޖ$EG%wM*E @% =d/Uf1Vؽ~z @ XY~ZIͽGһ3iCӾcJ>\7i}\o³'/Qezv;ŖzQ:xA @`zCl伟)9+JV],ONNײzY[;‡dfL?7 nfڻ?8擓6/& "3L{{.:kG$7O4 .r}􂇣$HvI @ @ @`Yz"@hӾf/HZImFܻVf~2,c[6χk[ i:i?~ck]d|!9w46O69OG7:mþs>8}}2yoägf @@ߟmҋ<91=i]995ytF Km'Mz!s];{{Mi*ɛ^xdd\غpɥE @H`ѼYX~):G?)'M5wHޘy#flcf\}e"0i:v`ZW0ğW- M5v)_̓-90䨤 @[Z<'I5Nr`9~d^|5l:}vɾ/&}u|6{a[}Tr<09>iU'}E @D`G EFH2TN}Rj3wC~\391/Qm6PfɵѴAPwe' ?;M:QR>7#:=09. 
Yj";I!S䖉"@ @J }P/N_lYɣO|f]?˖m?#'b'KcM,7yBrp 1YdÒcrHr`Y @)wOX$@ };nPm%9?v ~ ;#^6{~ΰ媗eG/Jz7'ID2]_ tjp0Z60Im_!yvJ~}~ݦ咧&=~Kɓ.gM6)is8 _1yfrl>s[LJt~@tvMǸZ d&W!~[}.}Bj5$@ @~?q.>eܴ%6{ϒ0Ohh6?Y_d @ @++~ 5c㧶ڮ$fچpWxfg+2 5ώa=~ݒuB柕W?Z߯Fҟ qMڜ$woN3jm|o {C#^ 9yk:=yJrZRMz3c?qE @D`Et  @ @X6dݺeuIf?̟NJkN~arxfgקpdziOvKl @luTׯ?k[[Yck` ?ֶw`A @.!FWaߕmfh5=7;հ;쿕h" @iжLy$M9-blaխ-# @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @V"'iɏIENDB`concread-0.4.6/static/cow_arc_6.png000064400000000000000000002701141046102023000152550ustar 00000000000000PNG  IHDR sRGBeXIfMM*V^(ifHH 3 pHYs   iTXtXML:com.adobe.xmp 1 2 2 5 ώ@IDATx%U7rf$9!tUw5Ąk¸fa5uMQל,".]8ʐ%I( @ @ @ Cw[֣ z{`:KpƏ}qױyʮJ:I:~?1fZ5h_'N( @ @ @XLʏOūԫ O'wH:~YRO_ݓNY7+59__c79 W';kΗvϒHՇds^}2~Ĥ]˒k_'MڥaRm.NTN8/cZ^]&svmMc{VB @ @ @=&zҾ&OMN?&krI=}irNKRIM0$ %5q)Gg7&'uLd~9IWoxsR%Ue}aǷ>YVT 듭*HV$g%HNjr7ѓ,RҏMC%w%5q~u<{UR}LA^琞I}I]gˡ  @ @ @@M{MFwRj}ZR尤&`\nYo߮ z{Z|-9|vYvխ3^W$v?&k^)d&ҫu*MxI'5_I~#Se_$XGfΛh9a-˩iqu+R1w&'sʽ^֗ :*uoL8B @ @ @mk$vMVB]S*IMz9 :S}hmJ[ۥpxEgҾ~gSK:_$S7K5 }&￟yd{'ݥ&''Tq{ݏ SwMg~24Fy]jj<l32D=0Q @ @ @ [˨k}1fG$&{$5۞rr,)%5Q\_I䚴آY>,*vw*j\U4XS2)[$'&yNEkYck۶vjwoWvݒ&+q_ԓR_¨7T{Xe2Vns<W޲B @ @ @$In>6 {IMk IMl<=^rv&t^ߙ8{*>eMdEuYYyCR"C6W9ol7W-ʩq]R?"|!Jݷ&'֏KNk_vg[܃ξ㴶{>wY/e|6Ir|,K @ @ @ @ z}gҹ~( z~eyR5{RNMj¾{S*O[fg=9K!θjyHƻo&>[R&k~ɒZ_?ɫz|M?1)ת+nqNR~aIٗI=)_e*xyj&ˣQRe*N;Zl2Ag6Ook @ @ @ J7D O&`+LtKR&<|,Ԥq=^u\%YvXUwe񺝲c'Nݧ_L6K39=}3e;gYISIUtdeI]Ov?~L.O:UNLũ':mt:S:rcvO @ @ @LC`q{z mZvJMo_ne=~{!YO:YݒZ6ڮ]uIg¿Zj^IWg^ϰI @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ ,\`?.Zd ;rʳC.]z p @ @ @-δp¼XhuC 5ys  @ @ @0i?Tc8:o/q  @ @ @ 0&g @ @ @ @jVY!0=`W/Yv @ @ @YOڏq @ @ @u~ @ @ @FV:'@ @ @ @Q0i?wP  @ @ @ @`dLڏq @ @ @uF}O @LY`GA< oe xNrVrC @ @#@ @}4-}(0e?qɮI|#+?l soЗg瘳''˒'&,["+٨ @ @` _  @ @`>%M|2yHŤ&u^RoXg'I$OޚT֧~^O[ @ 0:)C44 @ 6|+yVKcY)r(Ӈ&*_K ?I^|$Y( @ @`9%yP);d伤9yGyBW"xRߙ~| 
vv'\.鴛՞e/β3a9gIJcv/Kk6IlN.O Oߤ߯O  @@ @Q&|)9* l+W6j2L?.Lr=Ǘ IGISbG)}D&u/I ݲQU^ߪ$$ǎ/e;_M\2%f#+|]7&$Mi'  @X%` @3Ě(ʊgIM7'xiY.M,OI'jUv>,cɷ*5yr&5}̤z}Avԗ :}NòqR_D*H>Ԅ|}UIJCo'U5߱;%L욼8/1<#$9)92{ @ @U&} @ @}XO3}`wR,Oۓ?\U;d}dqR*o *Jdx)ge뒚?:Lgu՛&{~Q_O|0+Oj!IMדINO:YI[]J}3a_'$JJaZ_hW>() ]  @,$P/+ @X5yYrI>͓&5i_OTIva|&O<*?ZGVu;]+N_˲g~MNvkuC*jmI}[$5zbSj"NSOWqTȩxilƟk @ @Xgaۨ  @ @xzFzQ_%ݻMYO\%W$/^ܡG*[-n67Zs&oL4ʾ 7$%%wN6MtJ}ʖc$[]{&t&z1dQ @ @0ic@ @/LJì ~ǡj&5~riRٛ% 묮*+;+Y"%i ?#yLa)J~ork2 IM쟕,O^Tݵ=N/j}Ne$_NKR~>~PMw[C' @ @Uq @ @I7z|qROtrI}#TJWy%{$'Uu;Z[OUͤkR{'/)IHjdm_J|8Wׄkg&UW%U ?Hj &wLjdmg9dέ/8ycre @ @U&} @ @9ymI!+5y^OT;H4|{RO?:y\RU,kd&;_߬?? Y'_|:'\TdQIMwJ=^xū|,˦&H>Ԥɋ*T~}Jg\+6WYn]ƳAr^rLD!@ @^B`\`ɒ%#˒=-gYڸd @O=:9;9==]4Md=C @ @ @ <>/ׇfxGݜN:7JSOmjf]wڷO^޲e˚g=Y[o=k~|3nÉ @ @ @ @`vL'.KV>wk: @ @ @ @/'/6}Vy|5z54G @ @ @FJAm{¾֟=uӳɀY @ @ @ 0wMړ?:]{ÀY @ @ @ 0M҃g_{~E=k$'hx U @ @ @ @xIz]U'AlnFwz\辩Ek} @ @ @ @H|<$t^}Τ:\s/mHn?z  @ @ @ 0rI;<99ꟖfF.9~^rQWl+ @ @ @ @`6IOړySb%Yn]t0 )"@ @ @ @' M4ީ_cΛпl @ @ @ @`(^t&'[0u{\O?99/_}[b\ @ @ @E}i3eF}č%NH֟'n @ @ @ 0?NړSY?' gݴd*hG @ @ @:> U>+3y9W@[6'fm5sB @ @ @Lvri,l29:;y @ @ @ WJkSc/'olR2]c7I{  @ @ @ @1]ʤsܾsEkzmTy9K @ @ @ @ |=gm2ZIMSIקZ"@ @ @ @''o+H_~+ @ @ @ 0;M_} 6IM&ϺD @ @ @xHjzM~/Ok=\;&4uzC @ @ @xE'mks%? @ Gν6R @ @`L"^Qzͥ]{>k7ZW"@,, ~- @ 0K56Kr9{=ןcuyzAn%@It  @ @ @Ko @ @` ,y:9%ˎځ|mtlǜ  @H.%@ @ @ @$`~>Mc!@ @ @ @0i?RKg  @ @ @ @`> OwX @ @ @ @`LڏY @ @ @O&4 @ @ @)#ut @ @ @擀It7 @ @ @FJH.%@ @ @ @$`~>Mc!@ @ @ @0i?RKg  @ @ @ @`> OwX @ @ @ @`LڏY @ @ @O&4 @ @ @)#ut @ @ @擀It7 @ @ @FJH.%@ @ @ @$`~>Mc!@ @ @ @0i?RKg  @ @ @ @`> OwX @ @ @ @`LڏY @ @ @O̷ͧ,Yv|l.@ @ @ @@O5gy̓_js'j^W?/jӫu~ VX|\=a<u6 @ @ @s'I_ʙ>4O֟QG~_qL+mgDdzgkԵ^vN^NsY=a_p G?ѫ:M  @ @ @ 0{tҝӝ]>dɒvp{e뮻I:ۣq. 
- wرL|y; #Zmz˲yn"!q2GB:;;9]cZXrʬGVGMf3Unbt kΉnWk[4(Γ>\p8;9E{>;ޚgۢIY4IZN.gm'mRd>Fk[ukW!q8}>YwT\i |.   (';M6Lb]ze9~i ϒ#IrnӋdަYb޿5*xΔ&d)mv{kDvgeȵnrG?I=i¯;>KJM²vc;aنHYidGvђ?]haTI2R0#E.xA϶4Kwl&gEwd??kkJjR)GTEAtp"<7¬s*Zm]y4YMj[QVi~sNgU1ZX^ѵQŝúZ'VڧV%on8-UAien_XJR{&OFVY2g*) QZ^\#\r2sRd:?CWd)+;k[[idgH}<\9QwT͛yOP3[3D@sF9߮_iY9CcvӢ/   I S;*oHS_-qz YQ߳8d wh2r[΂Wu89ީYIb|3ը''wL08kB eNhotٹYrykG!߮9OzdjfNtw.[n珕_lYMΗz'KF}?ԨO"/-;mO NqNP^Xs֝Ȕ+{[5c2wޞ!{T WjGM>m5/kJYT%J k^`rKVV֑qϓ?LѾ]CԀ._9˜>l_^N||rKk[eWsݺw{;62eDص5kRYN{ώv_2/ b+HvymbʢIW_c?߇i:i^ $i\mpnAX}ƾe'Z h\:|ɿc]c) E|Wn@@@@ գugoE9IybSʥzTKʒvл=y+bq7n9ճޗջVJ $Yl'if-,_[*'~R.[Yn>.}sb;A3txBr77IդrfܯC%>6^>d4[OLוOJB\ +vh#uyT^oݾp} 2dOnyDٽH?]ݬٙ'/K.) &|OLn]8z48}|y_w+6CR&8h,q OϬez$㎥{$N4HŠnT;*,$Mπ;s5Oh:Ǝn,M]yk\o"3O4VZ]X$n+q7ܺBZپd&ٓժw vv~kZ@^+MuIL9yf{[J\OOvt*6N79O2b)6ަ릗9ݖ2SKp #{7e~}rIL[.VΟ}7:o;ֿh=bh=ZcMeMQ?RK[wFjѕzX\E:~f#rz&@Hb@@@ rr#CoِNO)as_εZZTY񲩞޷`^N~JlmQWxn54\}gڟ'9o&>)M8&bS}ăz-NY=Yn2BiS8SdYZnxN{A~uUR%v@IM"7}K͕L9`A< !lv\ 9>gh[qSZ$ɵ\ýc,pTl[Tvh}9DЍ>rٟ54+Y9Ο3m$g+M7+/2%-V \nҾ_vSHƚ}:Rm:L3Z&^)qɱn@]IUN>;F7ӭhKPN3Sc՚56%5wSX1Ŏw»4Et;=^y `%]|_ׂ-=TFW3J=ZnVJw:.^kn~^gbQ8Q'4jN^+#N{N[Yt=2C沈{~#~yߨѥhQf_ <{9-7uWxnzj9ƟvNlOjXiO?4+h5Ŭdhm5-¯hrA̎   -P^!pt|-\9 x˪m32ՎѽUc=hJ.? fVi/֤kiV%NI[IIB6_6zɈ>$7usMfʭH۫uoy% .}H%]=G*&K+޿ #ɂs%{QωΥ'h;&_/r/6ܷVC3M&SQ@SY'+(>҆F}U }rj^o ]?dHsK>;_sF%m'5+ \[S}bVm^+kK]g:{ ,ŻEB.[F\ZfZhgښr-ǟ)_ђPѸ ]\8J,yk?+UΙ^bt^bk&9-ڎ-Tc)4dj~ֆj+2Odi۬X-[iOYۻO*lmsZ9Ȋ34V-4qw׬<X]jw}\bo4wS>u׊Գ],zyifسi7:^/?lw5+5oxV/ ҼhI7Z*1ZpΓ6)Jec==^`Ӛ5kY|m0MMcRhy7o&GN   @ \ze2WJw$nE{;U}h΍;46lzY,3;S~]?C?V'-G7OasON&րcw^49X>wt@ n˹@=Mc4ZD@[8deda3v"]-')!Xc74?iJsQg+>{Ws&Uw3_cKZ<Uc=p˭G#2'#IUtą%8vpvfl9ƟvNȪ[%ev9{o%=&wNWl sUsS%w%m)3{ PcŖ4imvW4&hiOf}:pfܢW{O@M[c]q 䓥. 
o5f& -ԯW/ɚ[Rrܐٗ={}]>fq7{V=w}:l]sM&0-+ޠ9Gsv@jIK_wIVz),-v2jWrƞ)S+WVȷ#5ijJm35&W=@@@(P!9Յri2` !E:"h >Qo;jckǕ IM"mƆ8ZkSbX|:FMoaέ&z907;:e&WSԧ{V)Q1kV4 h edd:K'[ڠ[ >`ߧ9\ǟa2zӬ>٬05y}퓔i'\nB/}ޮɣ QWM$S =V͉Ϟ%&(J߿qM\3A,1Mc>۷'7%nu hDf!Z[WWۨ{v^SW=ivihNKV{5[}w2ܿo=4j򝨤CZipNUtz|yoۋ5u68)oNAYeH{ L ky'47{ڻoq?t0Z[ qR Hk n|sZf?_ΏkLuĔl1s22~+#n   PMMu}w}XVMSZ'qQoG<ͫU :5vz-Fbbez멱#>i,l3 ϐ=XE\Z Jr8>|sCmuyc~?wnA+wOi<Z}{rl>[V:~%ێ TS"dK+| x U\~Fz){zZ߶Q2{/QQ}XkW[-r8-lwS/iB4d4j=!t ϣ⶷uB{y6+skzVX%۫wlxߓ/kD)?7NXm}WcO{>Oom2t-~>^ʲOI^oL Ϋ*;EMNxZU]Nob/xaPŽ$Wf_[1g}eW5c4kxXa r_龜_=5YܫyESNxoJ@IDATbm~D 0@@@4Vڼ{Si'ik#5atCIy>DxB\scM[RD*':i49Eze4sd9ҡΦ^.|Cac~hjij~XNk[׉4>0ǪkӬO>[2)S,_|0e.>ȯ̲4͎PeXfkl54Wkjr5AkeOIM͙Q~OAZYiњrڂ/;m';V3;Ftx8QfgHVɒ:޴,E67]RV4_2߁< k~ikZfj?{t^+ߠ|{ NK6OV=4s4m4{=Sfo n^W 4hgeYϞ@u!E5;DPZ0_+[dmngbDc|ERsMMr:rYƞ]jFhߺ(C>9Y__6V5Wikh'ڧ)Dgs}.`˙qg"F@@3;_m[koTaA;G)p9{;N>"P&YOۢG h~O6f'f<$s6Δ,TX 䵺n o.JڞMڜg rf'';]6L㓥[sd/E滷>(~;?-~?mNDc1P;5!oꓑޒrbgPP MҹvQ؇V**IEg^ y+#}bn3s^yXcEzއռ K V\F}/E.4ݮԭX_^Fj{}X5WK}j;ש_ZS78ؕVw _+ɱdÏ;F5_4}F`OMo=@cеn\Y.Vy^9Zc)geTf4F#uOOΏ4ٟ=ؑX-z$5U){/bb`+lwixp[qCnMc"ۻ8iDz[TY{uo}wט`ݱ/꽓`#h kB$9{\;>o/Xb[|)&h;Slgf_ k#ZZw"I~'^1f,}{TP>ԛRWh~]|#^ߠK`gkҌͿcߨ PrX     lGR_ZѬϣ?gq|_b}w(kwu-l珕_YMף{'KF}'ZUk+iwy4*z7mqvj׼=mzjRy8ΔkMdbkuxoEt gX?ê74OS#r/`]{P֥3TKu-xzVnf$cuu˕wW](=h/ٙbT()`~^3XjT=NlXݎ?XXhk^Tbѱ5Kmϖ %.!XUR}UVOd欰n+XVRc&E{~' @@s3b깒g<,>M_~Z\< |}U[ϙdLwiF%9aDkytbEww͑~~TNV꼚 }Q&5Ft _hg?/7uD٘H?.UylϮf}dͮAL+`9gR|4Mm2s<Og[%'% |{oPԩPgG=gOԖ)~=g u߂bNK͡ 3`V 3'S/iZ)\ϟ2~xBߜeN/Z(ë̐v.?`=vȘsݔfʯ}L?P(SpY9SslYT~⺰:hM!ߩ$lvŮ ϑn`n@@@ L-+6/z-ɔV696Moכ_OޫvpNmu׷7˧z$v:'˹M:װK;/hlܽ-#ݛo9]>ra˽*,pAF'   pl_Gr ,K$vuk晻dB#^GvW~{iȔ>jȌ6H.݀qNYZٷ5[N֢ouCMw[0B+X!v5Ŗ5VTIRڝa+ՅiUSGx؉.X?)ڻE@@(Ы?}NÇ(ĊfE^{ˋߖD`)R#|wXə_˳3ɂs#?[2J7!wL^F/^,>9awܲ'M~Ob(x=ȝ$'!:ϴO"<>Ԉa%r-A@`ښr-$.1VƟ@kK~=Z*ʭsCW˖P%^Aꟕ*Lh/q1bǏ;~5jBSO4 uk5oi5ijifi\ a\MMk'<1<5vgkw,f߯Yy=20R@@@b $%ۿ!v]O+Xuomz~C;2Wtm]u1S7p?ʀ7oK',(U\HO"7\BD^//l @ %8v+gQf쒆U?**"&nJM$/GzN=&¿Bnv V\G(How鯉X|٤qu4h&jzjj(E!  
hXTILfJ ?yM7v7[RcOf7-0쓬,:A6 c/V6a)   $IsR+z/9]ٞ%FVY5B @])(x+[|Fᮊ~RSO짱hؑh\@`{4,;C;np/BC@@#Tjr-Y3|}[NwwTL,_/EvdnwvngEhUR\6Ns@@@@@b5Wޖ\Utoӯ#}[BxY1v3z;ZW/#uN, )ʔ`G+E,?~4}hntҌд;v4q۬\=fH?s%:l?v2M%I2a}ivnmSllMĞ24/jR4lta.  PrbADom"Neo9`P4=IV>_a&>l(`?s(X   x8QfgHVɒ:޴,E67]Rۺx+ƹ-/oO-D:WcU:[is4MM5wuUIwOztchA* N`ÞsƎ:9Cccw6~{ϭi75Wj>Hxz;OXci~ۦC њs\sN @@@ w-{һ}_I۳I^t+ș{H\lTt0IOn"lyBѳ`EoI|l93(|=/6K]q@@@ :@~o|ҺOm: [+*Is]z  M6SNxĖeE9n\Ycg54V7Κ5);>q'S;8!j;]~Iw޾;`'X9Uc^9ho{~cZ0t#|6hYi5{Zᛵao+k 5?54@@@knӬP;k{zz>uR-r\s Qʥgk ~.g01DBB3G 4K4eDԉtۿ׻{{G}~]}JZW/tYZogWb]۫jenU}(mOjfEeEWQw&NO>=+}I]-zO{mzɏq29wԤLnnIڳ uIuYx%uON6}RjLnCM :5I ܏XV/6gdOʿZ]zAIյy]ԛ2=+F9F @ @ @#Oؐl|EVAI=3ICTVoA|,ȷ9G Ki4vno,k,$mR6wHTkֹlLMڭ.Pc9k ZMsݿ<GFePwۭވ.ײϵV\/, @d y'fzֳnuܑ[NΘV5Y 9Idᑮ]o5<v/j$pK9oN&ԈYESZU0}:yaRK}=Vwh=٦ *J/UA.od93x.6N^Tڏ7:/mL׺՛ޜTQdq"ؚ}4Z/5fd1FW&  @Xq@5˦ @ @E^ h6*_T;5>ɽ*7[=B.WtcUn=IaϤ ;&$Vod;bU7AT{¦}O<12guվ{v.1ߏI]-&@ 0 F7IqNvi=M$ai{R<&eIq 8ڽ&bR1Z{IsR,&eIq /9='c0Zե6)2ifIqFui-6[#dnė$U8/UQtg,PQ>~9[?GW'$%Lڭ==߲&wfmIIHǙnIո~L6 @N ?QS7 b]ݕkr%ݤ\dIut]&ωkzr%L21qML11\ dRK[WfRI utQ\7 rM\wWv8Z/uNg5YKR'$OM/~jه'+~7OkYF+.wus{^qxߓT$J]rIݍaImSSҫUe[Ò}IjkjLPOF87;|!eH՛2MͶGfXu=-fi+_  @ @S(_73` 0ͿjzO  @`D7M?BwBur{A[QmjHH|q+y(9)-*?-;W݊jumu 0P㫾+&/K>ؚ}ېcZwBmNry~QRCmMov[h_o6ɦ|%@ @L@~ah?͐  0K.\ 0;Xh4( @ @ @ 0xU @ @ @LT\&$@ @ @ @YPū @ @ @ @`*2$ @ @ @̢,^UD @ @ @S!h?  
@ @ @ @`g:' @ @ @ ELI @ @ @(h?W9 @ @ @ @T(Oe2H @ @ @EEYΉ @ @ @B@~*.A @ @ @ @, œrN @ @*0?74 @W; @ @ @)hߓ  @ @U&a%@d_#@GF# @ @)X+LE4|̈@}T/i @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @m5 SCw͚5G'ۯ ɺן9.ܾ9ɪs5oKXL6&z?$1'?JiƯHeGWA:N{]뒓 j֮3u @ @씝hea:>U<0B]g]kVA>mIh8ȾmX>mJǶ mRox~6,;a&;p˾FK'OeG Oͫ]o@wȺ~`fNCN*ԛif١Ƕs @VN @ з@1ؤ {$keJ?Ii*m)hԂ}odТ}Gժ>hѾ>.aTa^kwbQ࠭~fh @C h @ @ur=zdt1pz=6ح.tᅿ^x@kicuV}׊!f}>6>vmtzZK9wC׹K} i.%@ @%P:yrɎr rq>ugzМ}'e&xŎY^gS4}\;4;صɠ줟o)G;߭{ygw1;XdV3[ \mdV^ONҭ߅5 =  @3!h?I @ @"pVz=$94#yr 5I8׫lw ;K6K8NmLИ_ɺn5Wb+q1Jsq @iPg @ @ɭ<+N.+"b}bhJG4t @   @ @ \uG[$/I{l @ @@E.( @ @%![>wrd뤳Ty @o @ @Q|9ԣo<&HR3^͝|%@ @|H, @ @XVn׿I~h @ E@Ѿ j^t_ͧ  @ @ "ݽq] @%u=Wl.袹wc;\0w @ @ @ @`%it k֬~)Cw3~pK/mgtw}O>o>~w6,sd{ @ @ @ @`\|#WO;e͝zWh>7[Zk߹5GY3ڞ @ sk׽o>i^I`v5e即;KOX6ٯtt߲cY֛Q" @ p @,oͿǗE`'ޘ|3yHH^6./l̏kgojI=5 @ @^%c9 @ azZ-H6MI=.pIIU11M6v9aYFɻS;%&7M4 @ @ @LϴKwM} :Fߙ?c8gKOKLXi/h5?XfF @ oZ3mc6^ @$pi`ƺl͢ԁB ޝm,Ica`=0l9&\83Yջ61I @ @ @PK21+0{>}f lx; @`y|ZgG!@ @ @Co%N6_zLS:]8^?G76!l0( XfT?4 @ @`<6C[9z3G}}?SӒq*ҿ%כ  M ,wn>v @ @ @X_]n꺦zg;d5̞&`ug8S}k @ @ @X ga+^$L1ݓ{s0v!@ @ @ @=-t%_#7=xAemN @ @ @퇷=)t.J~ sG$%I?ۯ @ @ @ @`"')!ɥ8*TvlbYO @ @ @P_ >M3/2lݒ-8W,#""o6V @ @ @ @`EW}]U,2$X;Ev @ @ @P4ګ}yNФ>Stf,OLNz=9Mi  @ @ @Q@~5^Ϲ[7٥>C~}R)?Ϻ 9u" @ @ @,zO: 81-bX;ϩcYc8=_:ezJDmZ~lsaczLZˮ$۷}9,;V,xc3|cvns\9q;3O @ @" @VD`֍#K3X6?L 3Ԃf;8-[oܡcz2Ěd廷ܱ=L v5VxzZ\Nڸ11ֶynݶi.}L @ @ @&J (3QM vqFzPnM.ɭC6dMz͓v}OozOfmz JsWxrn#>- @ @ @z (Yk'{%ocsK~(A[SsY?c%$I[JII^<0"'ɝI#/_1K}ŧ%H$LFվ,4A< Ƥހsfr@Rmԣ @ @ @l鐓+PwW;āG9b=@`4W,ӓ%UЯbu*9}#!ZdnIGI:N;|CrdR˟/U7T0qV웼'yeRo:8&2; @ @ 2 }ʌhCw͚5G'O)oH֭_q̎{m*pJz?(6egR_%g$u'IT}&OO<:i3IIګ%&ӾڡaI,.+{ݥk ]hVٺ9=ߞׯ$L^t;, @ @ @#hBjV zAn>NNfE*oHj*WQ_w&iz,*ӶFu%uGzӚxD^*$cC?3yLrlrBROI @ @ @fF@~.,aߤ[Yc4iӳv>h_u}}E^aR}ٙu{Mݭ_+yJ[]w읭_VO82yRwe&&& @ @ @`:@Uݤu璋j&$$JVˎlROG&$\#@ @ @L\C߄a~x [2.)pr:ZRw4i*)yNed)mcv|/O?9.4v CK\) @ @ @3!h?I 0g$syaWY~~{bUdBGܿ-"wݹ Qncueŋ>k,o>"9+5)6q7+ @ @ @`'3+.8*|-Ym~?[}J^<"SPk۹M]1t_G=jGԛv($帮f=}%@ @ @L^C"@`&~Z>cn]\ulW׵C^3ٵ^l7fϓyDNfOI$'v @ @ @&N@~. 
@`猴ހPF @ @9f @ @ @E)PI @ @ @'h?{ @ @ @ @(OɅ2L @ @ @=EٻΈ @ @ @D@~J.a @ @ @ @ (5uF @ @ @ 0%Sr  @ @ @fO@~3"@ @ @ @)X;%4LLܾ;H#@`5 lI[37wjO^qfzЃv۱e:ou4ױavϝ|sg}nw>smrI @ @ @F`4ьeU/}Źs9s[owXsUqƹ:h$vvR:壞oF738cSOb{-rf0' @ @ @X34;vܱl].0{iX?ou,7;M6ѐ ̊Ir%,>*[ @VWwLEƴz{zYU5cvޕCqzi_u,7K @ @ @*P_ދ]Iz3^r/!PUߟ؁c @ @ @jP(\uu|߲ى6?i홼(NJeE @ @ @̰@gxOuEO69Gw̿0yBra:'pqNӓnd+] M @ @ @ 0kg&Q\c4_'X>i>vm~e+i3`]n;e ӒzF @ @ @ (ڏIM73%L?0ÓY+mVO^{R{nĿ&_Oc @ @ @Q^;lnIgYvdTt5䢚>jl` >ܰ˺kf  @ @ @Eաv]]ԏ?/yrǎuKݘ>k܎OY[ ͓vE&>Kk/l>|>oOZ @8g_i @ @`Yi?y/CZ#OS8 ~D}Vs4^+["*>i~V}KC#@ @ @ 0E|_g66/K{rHRf;63Om.e ڶYm JS>!`rͤhx%@ @6,- @CU*hߕ:ugy}~_vtsI柕<%MG:{ Nݦ8 R .K=2fke , @ _`,#@ 0^+(z'@`^ϵ'gW>YhW L2%|6He|}qeG4UV!@ @ @S(p~X]9bm+&ݬTu'Ne2z;  @ @ 0}K:MPT5ZWhW LYcz]d;}qTǮ+gRR#@ @ @ix\,.6l 8-0Z]]`ͶpMkh$xEev3iq#[ @ @ @tlaZ(ג-WtӮ.0fk3S5gWW?MK! h?X} 0ICى @ @&Phm[ JϤD @ @&s󧤟hm.ȾXsZ4dnc,,޳Z3*\ EQa% dB @ @ @|"c(ܦM3%t-z L53܇%L~t^濜}6K1 "(ڏW &g`^&@ @ @xLX!ULjW'ZlVoO]V@.}uM`~& ek @ @,ke8?N*V-IQ'=#kKMk LY=9I:w{7\?$tE1L @ @ 09/P=(Z*NӣƷxJMk LֽٚW%?H}ژ[3i(6; @ @ @&cS.xW&J/h L]s|;i?_'ƀK3ih:; @ @ @ g_E*vVsV>9ֿ,~irFrZRwk#P 1XIK3h \И~D/OX\΁vNH?u @Xq2ۯ(3mE_`Vn`~!gr(;*SGfa]9 je>b? 
@ @֎C9d5kl?V ɺן4q0\9= @ @ @ /^u>Wx ʌr\ @^''O>TN5hZm%%H>8oR},yPkzVnI݉D̲%&?M^x2_4 @HO!mo^88F=S88F==-S  @(e4G$_Nޟ49 9*yhR%7Iٵ1??# W%KvN.jMWIo+7 <%_rdCI i/Lj @VP`$EC @ @%pt\ww*7c2d} ~_վOL|5$$I+Vӓ$*'"~_#@ @K?+:`f3c6ؐg#c1-P?4 @`x*Ͳ7?*ӗ&$wL{&,Z-5ѣ~]v]&&&L˾ @+(0 C @`%G' +_'^Y]vd5uW}= Td>գY2+qQvVܺsy @X^Ev4fP`I9/f  @ lu[$OK>g ..ѳUɾ]/[˯e , @YR#@ @N`VOݪϑ?.}eפ.lM^zL6_ώ&*+?Jޛ:9'ylROh[e @ @ (گpx @ @`}''&9['J|/ytaɇA['%oH.N><9yL/ݒ%^ԶM|<{ h @XisW  @ @I;{+!1W1ZI篑>zkv_V˛ˎzCO&$NO.&g$ޒ(sq}Lj @ >~% @t]`4ֿ:oMnz<jU@SssM*,mR^oZok~ؗͳc^G֦QyS[Jj .Y;'J.WII>o%%[''?Jq\#Z}I^E*)7ԣ<+996ÒZv0 j[us۱l~5~^Rc|B2lkV'&&I^ԛI* @ @ 0`^&@umgRw*9#y`R ]ewI>lTَK[ ^ק_ן%uK$%'U4՛}Ԥ U"{%L<'P,gQ_m} >NHf_#IYG&ԹuyrϤ}Vi @ @ @  @`@SAI7$uGw=Zk]Ž{2|&iz,?,Eq>vNkyZ{"u74\k [&uG'Lԛ m/krlܿG[9l/뷒M,!;Wc&7I4 @ @ @HGʩ3\Mժˤ> II]⺛į3}mnz| Rt}]ޚYzAkY\̀7om_ʂ[޲ZwwkuK-WNL]rYɁIM @ @ @ (ogO#lݤHj&${%oO?HkڏNoo,fNع;ױ]]Uޠ1>GeY1j m2?I#cZ @ @ @F"Pwdj 0^}MGɤݪ9I?,U1TgRz@FǓ'hU`7T|Fdߓmړ&vn/-ZDs&ZV/Jʮݾ֚}^W$'&'drvYwURPRNF @ @;GF#8%kfѾvBI=~UѿL mI'TdT蟒:;]zA=Igk?6:5Wtl|"ySRÓ̾2\TWE$u&]Vyu 'u*|:~_  @ @ @(eS ).gtQEC|׷"ockRE*o4yD3I/KǮ}=IڙɳodQhI׫[2u|Rc/ZnIZu_o4l3Z kzI[Rq]ͤuߴW @ @ @`ٔC 4oݕmn]ڛ^ulW׵C^ݾ+<]kn}*X7}iMKvLN_$Ͷys&_Rw<9;i?Weꟓl΂%u~:Z= ` @ @ @Z7̒rrc9rrr ~Y\ @ @ @&0Y)y$}Sf6Wmlp3{ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @fLT}7IENDB`concread-0.4.6/static/cow_arc_8.png000064400000000000000000003100651046102023000152570ustar 00000000000000PNG  IHDR uIsRGBeXIfMM*V^(ifHH 7n pHYs   iTXtXML:com.adobe.xmp 1 2 2 5 ώ@IDATxde63$1 tUw5Ą㚅auMQWL(*fQq1bQ,Irfy3EuzNUx994 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ 2'%%& @ @ @CeN7weEN>lm q͇u{ǽGKx6O%W&+[VLg"]?2|KZF @ @ @)(ufY~\̤~p(tkIgY>$v3 kko'' <2ybddy2]NO@}: @ @ @ @%,/k{>,u{c_ul1^}kl~-(xwո_㘷 {_}oG&M4 @ @ @,HS{2''hzD;'VwS*oHunI*MH՝Եhe䟒z |S$ak,$?LCrRrEr} I?N7'5ߑZ=Ie\N8?Hj>գohNuXuQrLRch @ @ @ *$1|l쫨|m<9/)Ҥ ?K~ vT*j-91cj[j:v;k_=)I=B?Tq_KH2kI*tWZs/HnJL=*Y79*~]o;5 yO8>ݘXߟTdyRI2z7{Czz_LL:_ @ @ @ 0X_E*BwRzy}T;4Cke#uw[׫P[wro<^/H.^c^]VEݣ{:UGwiYzN I]*W *V2b~'uN OVURuN{t꼑: cg?MuG}ͫ^G͹Sil5Dm{p @ @ @ @`A 3lG똵S;$'Um]:]^$U 
X[mǡfk祾0]nuֆWϚS6)$7%'ֹΆkͭmڵZ߭k<#Y6QRw[}0Pj9?Aë߹eꥃk @ @ @ *wTQ YkIkOH6H 97xRjw C[, vIkS1xNޚԣ᫠__`8j U/uH1tԐU޷&;eC5v^6By:گ9B}:_zy!@ @ @ @BdZ=޾Sld;ȫ0_ί,Ox]ݒ*~"9&B}]IU՞:ϧ^vQ×s^;׃/'5}ɫ2!Wi'w6v>:~j ˲aqc4P_D/=|Nl^l,ח'4v>*k @ @ @ *Lף{gc`Ȼ}rM<9(J6KnN.%_I*n7;I]^ۖ&cr@[Ev"+kd e:ndƸ$1!11VZN_$錹M^E$ZdkvqUG?")<=;㫍gqWh_uOjq>Ǎ^Y.Jic- @ @ @ @`A.X)V>o*V~0WdI^<~r^UXxWǿkxyI^j:]Vk~Zt̶cm_%$ԾYI}YZɥIS%HmvVqc98qG$W&&{'Ty>:Ӟ:ٝ =^ˡ٥> @ @ @ @`sL}ݫkU|y;ۏk:dW%z\ƹk]kߦ]뵫-kx[t{u8n9qדi @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ 0` 0`v-Ztd oYr9!K.=o1 @ @ @"A[`kńwu @ @ @C@sυtG}ZnX&@ @ @ @`fgU @ @ @ @juV/Y 0=a߼dsf@ @ @ @pg{  @ @ @満b\ @ @ @最b{  @ @ @満b\ @ @ @最b{  @ @ @:s}O @[`{^<(omLx^rNrs @ @pg%D @]`$&'$$,2 _8~7L4 @ 0  @ @@6Ouwz/"?c߫!Im Ik3I=z~GѤmO[I,lv koH~Ը\NJHO~ @,dU seE Jv{\V1ጼ.M,OT{*U+kL}hr$ժhju*v?9;X%XG:Y0/ TѾڟ'U/1)6qT%yXݤZ} Ӳ.˓³jKS#z/4 @X ` @ YN`oR&UHdyRk.~x֡;'%'*u &U\didɛ*t Y\$_c*h1 *=98VB~̤ޓ*ziB}\ iDk}=dFz@vi @ "l @ @`! 
TxyK^7MޔTj?(Vvyx7%M^TZ=eϺV힅z}nz"N~m, |nkk[?eCߝTɓ$5CӪ_sgk]V6SV{˭/<(ߊ @,,t͖ @,8geƗ$YYRw߳kHu~I]krUPɝ{~vARmˡ5j[TNHsߑ(ykR_Nxrd9Im>Ϻ䎫^BN!1Y5短: @ o @ ~=5֏ ײG*o,9 Vwח ubhu[ש\= @ @Q@dɒm-YV7i2EY"@ @wKLML;`ӶA&˓*PTbv{W_^^krQrU2ѶSN1sRww.ΚƭG2Vw%d< @pwdx @ @`@i诊BxgH Wu|eV_.X޵ʬWmuc @XC @ @ @ @3(X?.E @ @ @J@ @ @ @̰b  @ @ @P @ @ @ @3,X?.G @ @ @} @ @ @ @ (0 @ @ @ @z @ @ @ 03 r @ @ @ @`X숱 @ @ @& `9 @ @ @L;Cqrs-ZTu'7˗/or4{T[n\~>kJ{29 @ @ @P8>EC2#R?So۫N?f-hveiwӜzꩫ[lY=r-'=>H @ @ @fS`\de+O}w]5 @ @ @ `> f}ixR5zk @ @ @ dB}-?O#?Zgf}>]K @ @ @ @` ͨ~?Z}] ]{k[ @ @ @ 0krWg_P"kVN#tm&@ @ @ @-mFs֯N:ӳ*٥> @ @ @ @@ |&ߒ&Gix{XQַLf=>\:Mn>1۽ @ @ @ @9#𻌴SSCg&3پu^_\ҵ}I5 @ @ @ 0g6HoJEZܵ~r-Jf>.afrPE @ @ @*O:پ"5 Mwc|'ٷ @ @ @ @<'WG{q ZrA2ڸ: @ @ @sv=k=.fS'-5 @ @ @ @`0~aY>/> dG @ @ @Z>ҁ ]>+y97}[>'~}5sB @ @ @f\b,:l5>oǤ{ @ @ @ 0-^Sc/sWv @ @ @=)5; \k!g1$ @ @ @7sXEsu~x[2X># @ @ @^c~P-{1 @ @ @ @ lT6o14  @ @ @xXzgޫ̅C3_m; @ @ @HdTogV9[t~=z\F @ @ @RsU֬5#yPͮ$yfd:ff @ @ @S@~P#@ @LD{x~H?y6wʕ+IYt 5zْgmfgj @/1=6C @ @`g}.Y_ƨz.>k) |5 @fA@~] @UX4zS0s뙳v% @L(τk @ @ @ @߬oaX$@ @K`k΂eGUwɃ9!piO4H @ ~f @ @ @ @SP  @ @ @ @'n  @ @ @ 0%)9 @ @ @L\@~f @ @ @ @SP  @ @ @ @'n  @ @ @ 0%)9 @ @ @L\@~f @ @ @ @SP  @ @ @ @'n  @ @ @ 0%)9 @ @ @L\@~f @ @ @ @SP  @ @ @ @'n  @ @ @ 0%)9 @ @ @L\@~f @ @ @ @SP  @ @ @ @'n  @ @ @ 0%)9 @ @ @L\@~f @ @ @ @SXgJg;KcjN @ @ @柀;=曛e˖5˗//i*av7 /pq$ @ @ @B6\ExW6{lsgݾiGNK:>:fgw_0[bE}luAzP7yn=? 
@ @ @̼;g|+|H#\]_xDwj/O"zg㦺Qךj?vj/vX;Յ:Nj>O]{9ls'@ @ @ @` ~åK,W\%K<{$Wvλ>_cvBƽS-2-gak>M @ @ @ 0OY?ozƑévVV`y^\Y<<9k6)t}9{ @ @`iMsI_7CqAÙ7'4Q?Oo~wue @P9둮}v0yvޖmH.Ϧ/PE{%'wMev> @ @`nyO7ʹ%^Ѽ]7~CF9ds @ ~_$:媬םկIu"Pwg *1[g @ @ @sX@~޼?H²W ]ۭ_2&t뻦Ьw<}á}eWe?3iw_pۦyc?M?9#ლ6́5TrKsŠϱg ԝ3K.y{o~ڲ,_~Պk?5Y7Nοӿy˛}QsCU&wQ}NuͿOSh|/3* @}P 먝K>u7~]'TwwzͻG{3fͅuݲ^ |Rv @ @`+nllߞc{mmWc^4zq75^4[Φy#o~4oܷirj<({>i~֨i)oަivsޗ"O?4w_{Xl1|}Oti} \|MorBNqu7l곚cwEs7h|Koj^Oճ}'.l~|,nr濾ri7nۯl/ n7W_ k9a]9˚gcV_ @ k})|ʛÓ;qij{l[,%z[e3n/hX$@ @)MRh?i}iYo Nwh_M-j%4w:}wS2g6}fEo\N4^S޺ܥ&w.E/n՝>}/.͡cVySxK_w/oN߫cgD @?;j; \%oHP|59&:w!A1 I>Mȿp4N @*WB}?4tGP_i~פWqS}wǦ=QnjmM?Ȧ:wRWZٯ;=aU)hEb<7x~77u}siV_̞b~P_[:h;빏)v]9 @ķ7gߒ޲B="{IFcI[c5&s$.3Lf9g,<{ya亮dZy\Rwh @ @`tuKZf[ֻo3odʦvkSk~hxo~<{yoЯi ۿa7z=o)[]lۡm[/MCmníϳe^ſ5Qs;=pEk5sf76֋nM{`+ @> (O6?ּm{Mc>9t\ϥj 1u60f $uIozVˬSz @zOG^4_=(8iǛf;Vc|!E~-ݾiL1WJb!qSCY7}DwnٱMsJ`ݶ[杚W} G^j>my֣XULm6˛OY+e/خYݡCIKpègB Wn/m#7W @,~4w=@fq9z~8ޟ_4Fws,݃n6eo~9qklz\׶-M|zm)o#erړF{N^5&3hϩm>? @ @ @`X?3  @ @ @ 0x}.Md:Ffgl @ @ @ 0ysW$@ @ @ @.X?O @ @ @3̣@IDAT/X?H @ @ @ \@~L @ @ @f^@~] @ @ @b> @ @ @̼b̛" @ @ @,p0} @ @ @y7wE @ @ @X ` @ @sV`׌?>7{ @X 7o @ @*8g䅿oc9馛| @XgAڤ  @ @3,d3|EfNOKlkmf6k>]  @PwcX @ @~TAoG}s{Q~/YP_׸]ﺪP_ VX1= @@w@ @ 0U+WhѢt_/~M=2}7o.㎛Jw~n^O{MsylrrҫPE6{Լ ~ᴋ/xUf|W7_naY @` ~M @U`ҥ';"9fS~-yMg}_3:YdYdg<{;ߐoӯcn7eۑɒ䢤Z i3k"^zi󖷼=Y|yk @K% @ @ ;C[]I\h ;k\_]?xE%IBW Uj/p^`W/}Y%@ 0%)9 @ 0gO~A׶A^B鵭n7➙w&pj?8yLD@e.'붶Y$@ 0%)9 @ 0UXCWwF6ksw׶ZE{ ~y=9'Q͜չs.WO5 ?5l @*X?q @ @.tqV7+\m,VAOֿk}8Y6WW6zkeɝ&i3/\S)f @'$ @ @ k)]苺{߬ҵm.*>=usxmwV痺}swf @'` @ @ G(zt|hdK&߿kU)tVwwFY8 @d' @ @ 睶" KVv6̃~ךǢ,(YmɅ4[wݾ ֿ3Y%@ 0nqS9 @ 0gUogW `̡tnҽIK~hKw߶kU @Xg\GMAK1 ,.Mfl @ @xfAkFdxb&sL֤fؤVCYdEr}q8$S9-xxJq^ @[r  @ @`=k]e2W&yZbv>eéemtٶ @̔@ߊ+Wd}FY;$~"@ @ @ @"pP.D;98 @ @ @ U>}԰ @ @ @LE`wʵڹ2gd̵wrnw ='@ @ @ "i߰IƜ?0{Әg9K7,H4 @ @ @@NIڅ?e}94ۓ뒶Y-_!['@~_P @ @ @L!]K̋,M.Nߞ5A8*l.>h#@ @ @ @`H`\~s+e]Nm'ܪLN;~z/7 @ @ @%t.]uwޢ},˨uv|sIyXOM @ @ @}s.Ի=Y|˦Tw4.0vǤ~};kFok @ @ @)s|z.Bn[dN C6Iw{K6t:vf;kg3|  @ @ @ 0M/J?^-7M}nˠ_-v56ʎ3zbs箖EO @ @ 
@%1b^}fYG{8g+Lw;۪7탠M@}߁{%{$'&}7dy?S5 @ @ @fSv%9fI"y굊}NIUxk$ @ @ @S> t#-ccM&UOsrU2~ 8@';t>oW̛ @kf @ @{ݓq\9d CycAh coca? j @ @ @LQ9}hgf 3%W$sqw\;==/gq.=.S]|z @ @ @ycs s}31C뉎OKNzl$V_xsr}k\#mM4Se?YY}62: @ @ @*ƍs`CU2Ҝ}q,ilmG_FӅ"]&zF2gl}OX(HI @ @ @u'yH^Q`8}ȞYFkdP_1|-i#mZLE9;36wO%@ @ @ q?fq?Hv#^'#<(YLǜ!a뿕dN  @ @ @̲ rqߙc'zgtM Q:8O>{UtK2t4{ $k Kv9v97K&@ @ @ 0 q*U<޷Y{v^j\h /Cz5)t6GltW_ @ @ @fE\]|^>=rѧnٽj=FgsΩY, @ @ @X'GYw oݯ_ɾM3Fc=.A=/y Ƥ{~7'g+P?=d^}D#@ @ @ @`u|~߬~}Rźlw^mlW;,&oN6MK7kҙcu|y 33gA @ @ @pf|[絊qo1,18;ZyKIgDgȈt>smnD @ @ @`ޕnn03g{6dSw/\?~a, @ @ @LT9StT7h}:ָ:ui1]t\gL39=|:u  @ @ @IL憤vx)ꜳL`DZw<)L l }:މF @ @ @8vq%3qSr軏L pU%86C/L֞S5C @ @ @#PMjZz/ tvwA~92IJ7mD.m\L -x @ @;찇/ZȤ D`ʕ$,]֐nS[&XwdNs|Cq37}M11} @9& @ @(P?oJ4#דoyg$~Of8.37A>-`FELD@~"Z%@ @z GzqvZͺ;2zI YB}51[{Py;;w7s2R '  @ @"Ó$"|OVf/ @ G> @,T%{P>^q '&G&U[2쾝=>s;YAw,\%;G΢W @P3o @ @&-pVv"v'v(b"*Xt7=={]fwgfθ}$X       _b}M"       I('a&       X~S       @RIy       @(֧ߔ-"       Tb}Rf"       ~7e     @sD㿣˼^e0{rr˼3ek{?v?w$k2o@@\K'ù     *lydї%cmޢ|;t&|m6N־u?i@&@@@@@Ͽ;˦Yn\m=lhg6Ѽhޑ-B@js     }ϴ϶.:ֱ5wo;?cϼ=Ӟ~z}Clf>LdoѸuߩ]}rkY3j8nknXWOuYsbyы&0ov~ͬ^<t`2 !Wck[_ߎ޳MνkR좀gϣ6שg~;n;5Qg]qA/ۘűJ\L@@j$Rj"    $pI쌃[\j~XvSِ;YvuleEvȮWwz ы43ݮvec_6a2YU{o'Ec%ߤkشjڠ:Z5c]8ƾual8yn{GXMu%;ds /0  P(WwsD@@@@ޣ>hMմcnfw0F+eyG}MkOu&o6ly둡ZŽߞ{{}cm=wn7^Qi[oP)_إk-[u|> @;y#    @ kY)Ҏmkǎ{m}IO g‚7\ϪLkpaާz>{^ߺyGVe |kުmq Vc  bS@@@@@97^9HL?f [h5W*" BI"   av:{g?ukqb~)6X3}9S.@:lcKi[~~\7ڡG,MJEm7l+=a-XT`Ij+ F7ʟsog,g}~0,2'VVvYm)ԗe@@ Yo/'   @ ,=TK֝4!-R1K&뒅ɖd^e g/,fssv:1sfbvˏf:V0IQ{'٥Ƿ2_̵egkѤ7}xS;3혽ڸKWf9n;}ln/0ξ}]~b+/ 6wA̓ٱ g[ul  P](WwD@@@X캭U~)ExSr( s]ևOv=g.lnxjfuj zHzN{߿I>.?~FZm}6g~5#ӽk]4bb[U~drdf=tX Db}5y9M@@@V`f=n6/566\]N޻O~ ~jvvWW2A>.:\E7o6;טҬ~=VᲧYP9S]d4WyXO^fx-Z&<Mgzpx.Ù k "] zFh#WizתYtU[5u20:m50u^^3kf 퓇״sǶzZmBL_]$~!Ap1Ai # 4@@Jq嘀   TQ*_hax~n)f>?99;>*{Ugz}=5^|of3>{V^u_m_e[e/0빍ٺ;v٬Ifۇ뢀wou|F8kM8xZf gnW}vO"秸{NѬxE3KhմujWgEq kVAoVK8ozџ  P1W   8N/ovKW{wE{:F=5k3'ly.*hl6VC=C}߆mۃƘ"g\+62Z[ǧ5[&4_kv gn=~Lj? x|y!  
@ P4jv   e ޣ>Xn2[{zoF=YZNEo߈/,X#iE㫯U662wfVv|Ez_wgO-r3.$y.:4 @@Db}S@@@HIy{ݞ^EtOE~7t[PsZpf!%~|~Z¼kW3K6lßIO^\-[j.ryUm{O9  9 o9&r    +žZkآh=ɬq+k]Y1b(6PPP4ްmâS#D07{h###tW A@@.s   dzJ6|兊|*{?2oU{19=t[׊1[y}f_7;.103@@,6Yq   T>Ǚyի#z{'Bh6>uzfS6{w|x=~ƄDk|4^gJgׅ+Þ.>ZXN {v[yq2  @.Pυws@@@@ WZu6[oW\z~OR![|էp7}z{j7b=xgqc_2q f6{GN/[ىǟe`v|ZŷϽN+c,k9͖OhRQ_oeH? dL~w86@@@oI͞]_3ny­-kֺsx66^h*G#Z'6޿N3[mXTKPcҚ.ʺ,~\#"~+  ,v@@@@¸,i3Oм@f`l[|mE\S8e?`"]+϶٠]qb+[s: mf[n>7վyM\N;C#}5 a7~q}BkԠO3;͙nyv}[$߶ߨ]tlKڡn2@@`eJC@@@@6/Na=a{X|mBM$~؀]ىݚٔwO̊-=srub; *̛_3W'/vAwiXz\svȮM쳟׏  @JO@@@@@iBkvmjW]WfۘIKJ?9X4;TE]#bG`y/id/e/g{7m[=cޛe:   IYY     @& ,Zo?\dmxEޏ;t6ma{A<;UZ5$97gq0ټ:Աkجy~ & kewb4}kY˂" ,@m    dibթmG}pukװ:%~nZڱgk|Siqgq㖉>Zy  @R[*W+3    TcƅE9 ?{kVǺSL{ϒb|dxEӚ6wPm>6^y BSyW2a  P@I?: S^8S8IRѭv! _ *6rkڹ'k5ye2\ Q6RWn ~os4OAO!    @`ެQ {y|yQ)}s['ڴ|xvg,~l:j4[c~B#tk~{/-=ԲdOZA@@d%"3r Tt<Ѷ@Ty]4|2TH H `xzcOV(^(Ce[eVN~̾a{7o7'    A[؝/LnhGF_le[[϶`g֟޽{o$Vg̿>l NZ5-۳̲ziiְ>kn=mC@(Q O_5ѸE&+Uіjwvbݡc<pB>aa+aw 4xzB!    P3nnSwkQzhp3m[Բ7aWl=>%LzyOȫQr5GͷvÓ"^S]yZk`z  P@b}I++;ܛߒ *o7rIYxъ7m}rUuoRռKL)?P?IS+W+[('*^ x ^\~\yA ATRz)(A}zqݷZ-1D͚FnZލTT*W(By2'-Z{ˤ.0X@@@@j̳ieOYf^xouVF;ŞM?S_mm ='ھ}fb6[ ͙..-  @J]^HUS`.3OUvTU|~2@YM9EyT/{픗tG/VSWj+^HG+(^d VTW A_c+o(n+:藟t^[-S LU*yo-J:NWߎ+K @@@@)v/5nP<ښ4iMVX@ɊEp/oK{{ }o^xp/vLWwЛRxow/+Yŧy s 7^w-^PEWNߦ [/+Ee$xH{{2O^x{My36߽}-K*q6P[IonQSNޒ9^Ƿ5[Cap:/       @HVo,Bu5콢ӽ?SN^!2{Qollmʃy;tj[A^r` {N6TR^R(MSoHL髌SJU˸dM|Ҏ/(pgPkޣE~lydNGMV†E_AC@@@@@@@Rc;/wQ^6/{^^xbhEy/{1˨` FBY髟\߅/HmЫ[64+]_3<12S㝕ӔJ^K ; x[2TVV귺 7} a@@@@@@@ 'E~g7/zn>i^x!CXrH4hPwnkvO:vM:i(3i[dgnԩ+؜9sl]vjH(W?s0gpĈҥK[nFWg &L^{-o;lo}ҵg0)3@@ +DA"   @0rfIő/gӧO mɒ%rŎ ^Eb dو'dat>s)m Cʖ܇~hSLEٰa_NKp+}ϷCڀlx]@IDAT̙6o<{뭷L./ 2^}؅6Q>Οi      4T^VJyzɸ>NkegMGumJNVDfiZMV~ϴjMk_M&      kD+5 >frh9"-=Q JG.W.4@@ +8gAs+P &yW,[G@@rZnwKz" J(3őe*F:)fzP_ܢPB\m/kbMxGi8  3N= #  X[}Y{7^xy"@v @/*w*5#y޿'[ Wߵe'e\d/dG3 dIC @'3{!c߮!C@\`r. 
T?{V|8~ȡD@ '&PS9K5ⷊ[·9Dsp] |Mn| R " %@z;8R Yq一_@ߟh T(}]Qiڎ z!&Ku**"o#ӳitsh9 z!TQ FvW}Sw  ό@XGȮ]mP6i>U<Ƴg[AlQVqKXS9j Jy4͟Oki P=QI_(g*~AOt#^giU/NPܣpƇ*a@@ 0] LB}/Vp   @3TP?UV( @5^O"5rm&E})G`2`þʌȱ3 d@`h L OմeOs/ػ  P9ʡf/ @uWz*џ0wLi <{)w++vFG*\6uWBLuX(/+~_(BC@2Nb}ƽ%   %vj?92 ?e=냶SB'ӛՙ~_14JɖB#  P=(WD*ܣ*wP;do  %6H׉`/VLg >*Sh?3/HuB}U Ug֧!   3k%Zi{*@C~R[)Egڑy"  s-@@@(&e(E`G'J4/M0I  }k91@@@,Lk\E~J " ,ׂ() "+q"  s@@@(FZ[[d%?G,c(@2:r{lu*r'l@@(֗&|@@@r_HWZSݕG#EVU'mc?4ByDzڠ"rBC6VB7/! .@‰   +PSGv2@i9/5ϧW P3Q 6~ SVK0Ҋ(nqj0/[N>X9F9K;l@@ X_l@@@h#{W">iޣ{@GvR2UZ-|(qF@}v7};SC4@8g˖@@@TMu`kp.hg@xE;N+4⽝ӂ/L3v]L~~ȓӮ @HA@@@2ZX~QNJXYQ@~і9~׏p?Gܿ? (exY/L:S9M¤N5 nNph&'$@@`(֯+#   5+.EA?Ӹ?tF@ܢDv~?U·+,b }mFxDx"H?o*G+ p;|0  WU@@@|V:ġ&L`"51lqΑhc -$V} ш_!P :,U_)~1 @H0@bzi wШ=Vka3kh`vOݎ57:d{e  @v l_S*K#E2A5vʟiq5ndx?ܼtoeezx o|Do*WE3 Kb}X |LPr+П Kaa5kټYl쏑1]h~Yr;ܫڛ&ч`&Lz\lϼj+1@@ [NԁzSi{Lg4z{ӿ9HS~up }Z #`6E~#oJ} [b}X<_z׮謔wOm.?hmZ{.   @ ݧuB0M6B!@֑VXz4T{wF Kyʲzh3Cd:  @SbA@ vh{;Sι"vxmQ ?M?턢;6kĶ|c؄0@@ ?P.Hpkdw!ʜȁmiMQ88D\}9*  QCw?bLhڶNάUz]cm+/[F1:uX}4@@F`+?~/ԸB%30( @F ^Q3ӕE%,d#V/bw"  mh .Y+__t҄G㤣b?.)v!.֚ם6zxڋɼ" d):G[&( h=,J4)(^Cna yh  @K%b@ﷇj|y+M+SCA˻-C@T۽ʹ 3u%$@ SUz)+O_3ބ@K7Z@r_`[vrB^4y:í_#g\(YzU @^"Ln_~mu  @ Б}ht_Lgu:/[Mܥsg0  @U p~? PA ԏmyؼ _iO/686o ׳]l~&  @ nׄ @j(pٟc雇 W&  @ PR~v$u)cY=v6䵧͏<6@@2^`@ 6_#*'2Z22/hZә@2>st-\!Zc+ e-48N4@@Lb}gώ{!{#E@@rOlvFioKd: |JaDw2 XH&5o:YyGvSѡmu/柢O4*C` geBC@j.G@@@5~jޓ#JPBV( o.T64*P]?s}nR6| kx^vWJ55n_i;/5Yg{̶ @rXb}=]7Xoɒ%6`;߶>6q̙Owm_uUzMy2  T':Yߞm5SȌ߬̊c}˾=SuX@o0Pl##K84NdT*wj鑍 5brBC`UʭC8O)^ď4B yљ# @Pϝ2mgZlĈ`o?H>C?~͝;}];v*m{6   @iʆJy;~ʵJzz F{&e}n9>JKew 1AA0տCV>jo}cdV3RQ…c5~&%\hM eVF/S@vЀUǛ,prl/  >s9*\{ov_IVeA[?~xE _Q7ּ^zv@iPd7DA}Oe@r tzDֽJ*(^ ?5r[x"iԣ6_D(@u3WV J8Rs{>h5W}VTP:JDm5M,D3 @@Hݟ\ZBd~TIa3,@Lo ,f*}/@n^BJw%#UӾ?N')nf-*eY dkuJ*M1! 
@ PϾ/0ᶭFz'hFe\dZGl!զA_4n0W@@@TZ-+Z G$ݯ3BT\o) x|%"e\źjRBs-dUJ ܧڡq@@ (gXWiȯ P+K9 W{i   @e q^*ae(7r%YY$dnyIVYe8Zv2,_Q6ӆ_Waimʰ<">.:7TzZ.r  ^d!P ʋ?+Ո?{2nvF^V>S6 ?4A@@@Bb˸CKJ2x^.OO`:XU\)ʈ+Qҹxp ޳,6GeQc@]s_ d@?ΠZh_#+L#*zԟ]@G%)=0n3w@@2Q P~1qyZK==wPgS&eZ>H@.}"~VZi<Xuzkn܆6 rjWzy*wE @\+? 4/U7\ >F)֧"   6{%@E gn-\]F /?xV+G(UNRVSvWT&(Z 4"彭UU7~J**  d@eS3[KU9-?@O G1ʸ~OQnWY6Tf9J(P=:Ȯ]mP6ij5xl>h=jT:oNL?IoǂI/dn)'=WWwTHEi${np"`{O)姰NU-?CoEJOHAsJvfO@@rGEɲd?6`Y%Xϋ` VL YR+t!aAJ]Y[p·iT@6NK (~C%:ߕy~\(+ZiK*~"%_ii  9.@ZLe؟TQ?0/z^:4&)13g ? @ t.)ߙhqe_Bl_ux%Ň(q#WCz%Q34@E4F~J5NC@q8C`i`?եOx>㮄7:Bn {( n1d NJX~{;-{.ӡzhT@6~pu{:+oPhT@smT:ꝫ*  @ s_:R$-S%Z-ƮU+߆ͯ!  XOv֭Ӯ<4*K >s~i?Ne!e~:.VNrKNL:-@#{g6p^ Rx;k+  %гgnyyy+^Pe@AAx޽{!36ʴn\Z9X |R@b K+'KA ?s)> YFrW`Na-Rz?RJK|J@@\ dѭQ[5Z\Eɲݒ̕$3WQl$\̕tLG8gK[;0lH'+~%NRUڹnRb})@F@n#bvA}7z^I&Wϋ\ 㽩l>s-xoRbI@07.E/MJ;*ϕ0*&')~.X_ D@@@@@@@-HAP{2#mT"i?((E`#Yd:  @yeIuCjڎ\(˵!>sbcU3 xZ.;ieJxҳ! @H! K -J7%S |- Y?t@@@@@@@ X_9=BgF?ޟaϴW|       @% P$,MK{eͭ~eB>I@@@@@@,Xo^%ʑʼJgEGcǧ!      .PM3l;w9IS RXE@@@@@@@ POogϞWY +g݊<*Rm#Uu*"     dϠ&W NT4/f- 0Bȏd)O*mn^c!ɎgKng5WRpis-2WyPU(A_.[br'     ]e75]a WT>*.Χ22x_~QÅ&?L9]Y[U|gxYz@he@y( ʅZ?fa}c!     @ P7B=rDh{U&+UgUSI+~{7tKo荇 @@@@@(v?^]/>! zJv?*ޘ=U+u Tw+*~o7CҿTVF+޼7]_&eU[;m`paO;v^H^ |eCw#ŷ}6Re#5Ez R}uKbS\s M__[*Ӕ?YܬsGJ2Z̶RnWV&*n{D!     
@5 @*cq~2H>ʇ7-VX4pܣT^VV g);89h򼲦2Pi~(ަ*ƆViᤒS?Xm3 RQSը|V7?UJs}_WQ*o, 5:*}?Nj     TZ<9M@*ZkTمu ^ 2\⽮W+{(o{i{[W9QyW9PrKkki`e ]Ný|TWJ;-bM/{7QW~PU.Dk1f&$گog;/8Iv2W"SvSTl7l)\zRb     kܠfN'' @2tG{џT] {/轇gNBvk ŋ^_,U6RJ͎AyZ zOѰhi}}/{+K믅9Nmkw%)*)?*~m~B%Ƞe"_@px#þO    M#:5*6=թ+֗$s@Nf9q @x9(sP0{a{_xg/{޿oͷˊ^NW{UZ즼sKǫߒ (D6Trhd^I~Kz/_|x|]jh/C l@@@@xclcG׍7>?/^`G]m|H[?m ;_C{A‘׌{KnO5NZjߪ䅘 @ xA P5ۈξrre/4Wߢ{kwW>ROLˑQV)^^x{ߠҚk+KXeM"5H},Qx!XW?wo(^,G"Ϗ)S>ϿrrJ\O(]mdž6m1 V\@@@@ik3l[6go41eL& j!6& kϿwqz2cM:m*D1rb=H  @t/a:6@-w^xgo?)^xGԼ{]Gʅ}A)O+^tXYx%ٱʭJ&zޛ Hŏc4~i,58r`a|5wQz*y7/z{!I~ӉYW_}MR◀k A =xM /7TSR++7(    q,'ߚaGnO~Q-^n^|[grI[s+ayvE?W6awխ<-/KvۢC=.D@Xb}y:dI{X{=7R) Y{ym+*3cCW/S^`J8뽩 Aa<:s+?lo-` ^Po9(ѢwEMQ+c߆Ix!I%|Җ)2Qcf0+    _hKy}u:ֱ7>kԞM[5mM69EZl . FC@ w({˙!@ xopxQ {>ڼGJOvq[Wim4/ ]@@@@ &{ MK٩}ǶWgؠڌ9˭Ai^2ĨW)4s2^mkj'<&" meW2ΎE@@ME1_ݳ vhTJ@f`UrvubسgӮe-CϴYv!P={;D}M[Լo͌%)7Mod]zx2 d}8p@@@ {}ivz!f6m4C1/'MլQK0~P(]93755ӹ/3SeN Tg.~W{Y+:lş[9_b{oSOϳAwvoF,k'^A_~ֳg-~ú5綵 ֬X@ Xo  d@AYANʉ~/*l fttG=33ffv-m2?7oҽzUĺlY^^Uߊׯ=Z㟹߆vT=g*~U Tg.|4ْ܄1Uvܤ4[4߶\8rQهukXK==šbu8#2_PԪi-u⥋ Ia[Cڧ! +˕<@@@rS`?:@Ez Hdh@*xzzշ;g6=XwU= ֯N|ݮ̝z3=齀{:f_dvƓf5ⷪNl* 6cRf/r^9n^]d{Dܗ.ӷ"ۭ}H߿KmeeV[M@\пv^MWl<ײ~ x9 @@@ ;?6ek=4c#VWClbv]nTGݾDzUEw#z_5J6{_u+|Zܲ%[rPwyb)W_E0?a=nyuw693\H˼mE¾ܭt皣o1{sT W+vf׌=/~8muT/P}쓇״IӗYyֺyf m*{ߡ@5uqvz   @|s5}S3ͼޛ5:l_*`%fsu խuv5.î[_/u-͎#3 TgǓ Ocp Tg΋bO3;"?3ن{P/RC $"י.NjL嵚 Ԯg)V(j̳mj(Fk@.+p   +0mM=A{<̮.ޓ~K>ls[ UxPEn|/fg@IUlS!}:‘X TgR{n}\8G3Q ^ʜYX2?s˗=zfv+#a@@ j   es]\2x^_U?lc6fQ^o|Y/(@U|Y΃<Y63zM?<ʡf ͤ{){o.TA1@@,ˢŲ   w|Ht1{SsTկ~|1f`6 M̧͚-ϑN TgΝZv0;Zۚ٪qE>s~3{F}ciYM{]ңiK@@%0@@0*kI/zi" ,X+V?uw];M EEP,Ti & R'=̄9ދ@) $qmE>t4?E'MwӾﴦ&>)UdǺi RD7MSBO_u Z"'^l݇X~o$U{_%2Dl,<W<2  PHb1@@@49޵'C<(2 ,!RȕOO#{!Dz7:8G: {.y"Nֻ?/i|Jhkq}mL/54>{c   Y_h*D@@(@"7T$#]eWWϯޅwUo5F{U~tm .-)bCP_΍" iqȪ9>BkzNa8XuYy[{kdz|}DD0+{J-Ck{TXڍ x҄p+ʿ.? 
P2%cm@@@ {;jM9݃-:MkV>.Qk޿XPd'"#n]ӝy a(@z㞳VV Xd}F =g/ 9Dfs6=AW\4g({νpz@fHvvNH+' @i /mQ   Dj>m3k&^30{3M4Jq{](l9[o@ysO$(B+,㞳èZW"<?W)tuBW$>|YG@*@b,   P2( FxH&D,(W{rW\s`SV=J &sdƹZE϶ˢ&]jHzncwZ<~slߝ%mMT]]}k:vzs,y1̖S:WU b /@$ [%_[3o;  P{gǞ}iii2}t駟$#Cs,.3fp†˲xOY(+6m̛7O"OY]     #~ [,ONIK|QUҭS!'k0QwJr֞ߙ-pOvj̻G7 +;5X)\22siNlڞ)uGʴ{ݫ @f}]BMnݺ}XʦMo߾pI3fYr5kH~J׷s8&ʹM픅-.lC\\@@@@@\6nt߸w1QҦq:uG9{[]۴y5}ۼuɍ6U,OA@ X *y >='5^ hk5iY5}k>sذay)u3f xeav/^j<3uп>     I !7)g{Vg޴n4ZekgN}+I_JrfzU,?[F=th0rR "Tt ڣh@ REms$':_WgMu|U>E@@@@*` 24Oa xjlM{Tf}O\T!Ҫwrs]vbĚļ5oѠV&ygNEF@B gԸp/5hΙzhu,Gj蟯@@@@@|&k6g-(S~KWm?.Ƿ. Jlt&p.nN ~vk\}NԨRP7_iJe/r׋eѪ*^@(}3di:4FV@\tFG`JO4ֽ:~|1    >U%`ٿ-߲a sPjWϞh(Eՙ&/*]] ;qV/|hy-r+3H12Үiag Md}^!5|5~ Jp d ֘_gNT_iP@@@@@ DFW֐.f%ޫԎ_9tЦz7Nma( ʳonsIǶ/6={d>@`ۭ|%d԰KOc"[ f봬44k4p:-yE@@@@%`vK$GEIKbITҭ> )Pdn`y=Fz?k&ak4Qi, lg)h[7KyY}t}5xQ@@@@@@*һjIqnwxG5,i_Œٖcb ?[.:mFByP{ @@@@@@@o$KnfƑ[OըY_.='؎cb LſKúCs!z?ghV#s"       8$KZ9C6gMo=tI4ʢ*FN1vNa>ЅN}xIF RU'|S3       @&̭z&,=`CijU.mY['K׋Y;hOݢ?hai$g       @@ /_ī:nڟa͕YvOhXeYa%ͶB+ ikxL< wGxE@@@@@@ Y_k/]&Ƴ_o}WHOgzY*Ƌl6[-CQ- D|UTti5~-ڃ#0      ~ Y_|%O *Utb ~ e=:Sw;elY!P> tFυ}0=Ck!=W@@@@@@(ſ/VhP{p-2L,R,S u'hX )]u       HϹ֫Z?)e#l2|@7ۡj`a[j{Z|       @/EU^,jCtEX,]y %_"=k1҅+̂,      e/@֤}"NШYubQGau$#pwt.ߩ8      @dl37LOb`C]352~i2N7FRuQTzLء|):ݏ3@@R`؊G8 kG=W6pϋ{.pM^y! Ԭ/ڥY)JËb)O3m4G9T b0k$zMW%Y@@@@@@Ld}i/E/yK!}FyWe[FN`K yT#t=5I54Z/X@@rrr֗ӮQyrVzm*Mu kuY!f 7Qks f#Zd}ᨬ6[Y*E;R<kL԰Dky`6rx@4۬%hFFaݓCae@@J(? 
G"vMڔ}s~ <YXse˦ +e(\i@J$sؠ?34"4<>]US~ u.5CwԠ       )Tp:9a4փ ]"UƧ%@@@@@@@ УLt.4:,(f֍C4(ܚs)<5 7YX @@@@@@(d}Q 2?z}q>؇9pOY{}@@@@@@(d}`G       @ D됆x_* [hm8¬jaVt@@@@@@@Ԭ9{D@@@@@@q!~p        Ys       $C@@@@@@@/@@@@@@@@ Hև #      _d#       @ G@@@@@@zG@@@@@@@ Y7       7g       !.@>oN@@@@@@@$o@@@@@@@B\d}>       Hߜ="      8}@@@@@@@ 9{D@@@@@@q!~p        Ys       $C@@@@@@@/@@@@@@@@ Hև #      _d#       @ G@@@@@@zG@@@@@@@ Y7       Hr؊GfEaVt@@@@@@@Ԭ9{D@@@@@@q2K&[?k>0@l@X@@@@@@" e~`$<4}`V4e̊eKcVt3@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ B9c@@@@|rDzhPG@j0P`  Y_o   &)z@@Xɂr0 "@>T4   #z?@d  @yr       @d    Cs @9 K9k@rYϭ       H!       @{@@@@@@@? 38C@@@@@@@d=       ~ Ygpv       z@@@@@@@,@@@@@@@@       Yd       $@@@@@@@z?;@@@@@@@Hs        g~gw       @@@@@@@@$ @@@@@@@@@@(СC{hPmUsrrk >|7}.\x&K \cx{$¡(ׅM\#bf}`^ @@@ %@>r ?..{I݃3p2;RkRJN㚔(EWs/R\3LG PHʕ8@@@c\>R?@(ubl0P,8AXEGs(QLR]-P,8J @9b2jP% %cՐ Y2E@@@@@@+q    Dv>DgSSyꣅ[ 5)Ǜ0'Wߕw_a16aX2?P{P{5;N "@@       @K͉"   P1v/ˋ*H;*|_@T`4Ru4?*22j,xzQ-S3%y6I[ yBV)    @ HNNNK妯/>Qc  +P_"Ku_Wh|7wQ^_]DT[LmPСC{hP쐙Y5>rԬ(WD@@@@@@|HԻ@a Q}y88@@@_jl=crƇIW+I;M,J#`G6gk#[nɛo#,g}|4_t} yoRuݞ5rKIRr_m`s_[ ouN_3qϖg#NΔ[KT'{Fet|urG$#1&eL_G%6qWSTk  PLA0 Kְ~ְl׼K:>WV KOԸW]lnL;5i@@@rXg^4Rz5X޻`h-I&>=Em[udW͓Dgz:ʬ?:kv'Hʄ˾hgߒɫ&Igȟ)墖}_v0@e~=ұq:|\܊>$ @qVoiƩuIW׼H'a3$kO4 ׼@~GcլĿV߶kc5hس~}5BNomUǶu:~hcUK%N\:8Nt;W6,'9]o(O(Ѯ3nY&E|]Ꝟ$앦Ր/6wkss\kIcn 5,.:NOA.hqlv@@YkkXJMV;޳Xxwަwxq;Qo KO׸@jˏӸQ5(^%9z]Wdl̺kײ.uUeMoiqyz[5rWZ[>y6~DvCC`Pd@@@ XHԻ=`z6VF^Wr}q{56sf5Nl湈4Ll,QѲ.u3}L7QSq2#  P2׼׼ {-{͋͋EEF4njƥ5.p}8 ړ%q 8x%{׹{; c%<*Lz+gi+Hn)%*^Hrz^ ~`g) .Ӄ%=ܓ:%?Bϯo-   @רDS#p2eݐVr$YZluuGv>K2AoEV2ҩtVmv+ze^K pMp{Nk^d}!u\_"yy] h%5(y5\ [MwkbA#9 Sݵ">;#Gz+IM';7:fSrxp+TmŮ̏r2%=e@@   !(бqYΔϖ7^vjy2fk~ɶEr1R5n~siU,j!?oå:6lN(C(]o']F\%w, 5 _%yJgﵖ&;&VլW׽G]/3= 3·> ωE$}55k(B~}[YIJYּ 
[binary PNG image data omitted]
concread-0.4.6/static/cow_arc_9.png
[binary PNG image data omitted]